The Download: a history of brainwashing, and America’s chipmaking ambitions

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

A brief, weird history of brainwashing

On a spring day in 1959, war correspondent Edward Hunter testified before a US Senate subcommittee investigating “the effect of Red China Communes on the United States.”

Hunter introduced a new concept to the American public: a supposedly scientific system for changing people’s minds, even making them love things they once hated.

Much of it was baseless, but Hunter’s sensational tales still became an important part of the disinformation and pseudoscience that fueled a “mind-control race” during the Cold War. US officials prepared themselves for a psychic war with the Soviet Union and China by spending millions of dollars on research into manipulating the human brain.

But while the science never exactly panned out, residual beliefs fostered by this bizarre conflict continue to play a role in ideological and scientific debates to this day. Read the full story.

—Annalee Newitz

This US startup makes a crucial chip material and is taking on a Japanese giant

It can be dizzying to try to understand all the complex components of a single computer chip: layers of microscopic components linked to one another through highways of copper wires. 

Zooming in further, there’s one particular type of insulating material placed between the chip and the structure beneath it; this material, called dielectric film, is produced in sheets as thin as white blood cells.

For 30 years, a single Japanese company called Ajinomoto has made billions producing this particular film. Competitors have struggled to outdo it, and today Ajinomoto’s products are used in everything from laptops to data centers. 

Now, a startup based in Berkeley, California, is embarking on a herculean effort to dethrone Ajinomoto and bring this small slice of the chipmaking supply chain back to the US. But success is far from guaranteed. Read the full story.

—James O’Donnell

The effort to make a breakthrough cancer therapy cheaper

CAR-T therapies, created by engineering a patient’s own cells to fight cancer, are typically reserved for people who have exhausted other treatment options. But last week, the FDA approved Carvykti, a CAR-T product for multiple myeloma, as a second-line therapy. That means people are eligible to receive Carvykti after their first relapse.

While this means some multiple myeloma patients in the US will now get earlier access to CAR-T, the vast majority of patients around the globe still won’t get CAR-T at all. These therapies are expensive—half a million dollars in some cases. But do they have to be? Read the full story.

—Cassandra Willyard

This story is from The Checkup, our weekly health and biotech newsletter. Sign up to receive it in your inbox every Thursday.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 Humane’s AI Pin struggles with the most basic tasks
Which means it’s seriously unlikely to replace a smartphone any time soon. (NYT $)
+ The device needs to nail the fundamentals before it can be genuinely useful. (The Verge)
+ It seems to have a pretty severe overheating problem, too. (WP $)

2 China is pushing American chipmakers out of its telecoms systems
It’s confident its locally produced chips are adequate replacements. (WSJ $)
+ How ASML took over the chipmaking chessboard. (MIT Technology Review)

3 OpenAI has reportedly fired two researchers for leaking
But for leaking what, we do not know. (The Information $)
+ Now we know what OpenAI’s superalignment team has been up to. (MIT Technology Review)

4 Repairing your iPhone might be about to get cheaper
At long last, Apple has approved used parts to fix devices. (WP $)
+ But the policy only applies to the iPhone 15. (NYT $)
+ The announcement coincides with Colorado considering a right-to-repair bill. (404 Media)

5 AI data centers have a serious overheating problem
A Japanese ceramics company thinks it has the answer. (FT $)

6 We could be nearing a turning point for geothermal energy
Tapping into the systems is expensive and complicated. But new projects are making headway. (Knowable Magazine)
+ Underground thermal energy networks are becoming crucial to the US’s energy future. (MIT Technology Review)

7 The US Space Force is preparing for the first military exercise in orbit
In which a spacecraft will chase down a satellite, before swapping roles. (Ars Technica)
+ An exploding star released the brightest-ever burst of light in 2022. (BBC)
+ The first-ever mission to pull a dead rocket out of space has just begun. (MIT Technology Review)

8 You shouldn’t rely on TikTok for tax advice
You almost definitely can’t claim your pet as a work expense. (The Guardian)
+ You probably shouldn’t trust virtual influencers either. (The Information $)

9 San Francisco’s Metro system still runs on floppy disks 💾
And it still works just fine—for now. (Wired $)

10 Dyson’s AR app highlights all the dusty spots you’ve missed
If you think your home is clean, think again. (The Verge)

Quote of the day

“Murphy’s law states that ‘anything that can go wrong will go wrong.’ That pretty much sums up my first three days with Humane’s Ai Pin.”

—Journalist Raymond Wong expresses his frustration at trying to get Humane’s Ai Pin, a device touted as the future of mobile computing, to do pretty much anything, Inverse reports.

The big story

Inside NASA’s bid to make spacecraft as small as possible

October 2023

Since the 1970s, we’ve sent a lot of big things to Mars. But when NASA successfully sent twin Mars Cube One spacecraft, the size of cereal boxes, to the red planet in November 2018, it was the first time we’d ever sent something so small.

Just making it this far heralded a new age in space exploration. NASA and the community of planetary science researchers caught a glimpse of a future long sought: a pathway to much more affordable space exploration using smaller, cheaper spacecraft. Read the full story.

—David W. Brown

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or tweet ’em at me.)

+ In adorable news: a science teacher hosted dozens of his former pupils after he promised them they’d watch the eclipse together all the way back in 1978.
+ Congratulations to Trigger, a guide dog who fathered so many guide puppies (more than 300!), he’s been given the nickname the Dogfather.
+ We’re all getting older, so we may as well embrace it.
+ These hyraxes love tea so much, they could become honorary UK citizens. ☕

The effort to make a breakthrough cancer therapy cheaper

This article first appeared in The Checkup, MIT Technology Review’s weekly biotech newsletter. To receive it in your inbox every Thursday, and read articles like this first, sign up here. 

CAR-T therapies, created by engineering a patient’s own cells to fight cancer, are typically reserved for people who have exhausted other treatment options. But last week, the FDA approved Carvykti, a CAR-T product for multiple myeloma, as a second-line therapy. That means people are eligible to receive Carvykti after their first relapse.

While this means some multiple myeloma patients in the US will now get earlier access to CAR-T, the vast majority of patients around the globe still won’t get CAR-T at all. These therapies are expensive—half a million dollars in some cases. But do they have to be?

Today, let’s take a look at efforts to make CAR-T cheaper and more accessible.

It’s not hard to see why CAR-T comes with a high price tag. Creating these therapies is a multistep process. First doctors harvest T cells from the patient. Those cells are then engineered outside the body using a viral vector, which inserts an artificial gene that codes for a chimeric antigen receptor, or CAR. That receptor enables the cells to identify cancer cells and flag them for destruction. The cells must then be grown in the lab until they number in the millions. Meanwhile, the patient has to undergo chemotherapy to destroy any remaining T cells and make space for the CAR-T cells. The engineered cells are then reintroduced into the patient’s body, where they become living, cancer-fighting drugs. It’s a high-tech and laborious process.

In the US, CAR-T brings in big money. The therapies are priced between $300,000 and $600,000, but some estimates put the true cost—covering hospital time, the care required to manage adverse reactions, and more—at more than a million dollars in some cases.  

One way to cut costs is to produce the therapy in countries where drug development and manufacturing is significantly cheaper. In March, India approved its first homegrown CAR-T therapy, NexCAR19. It’s produced by a small biotech called ImmunoACT, based in Mumbai. The Indian CAR-T therapy costs roughly a tenth of what US products sell for: between $30,000 and $50,000. “It lights a little fire under all of us to look at the cost of making CAR-T cells, even in places like the United States,” says Terry Fry, a pediatric hematologist at the University of Colorado Anschutz Medical Campus.  

That lower cost is due to a variety of factors. Labor is cheaper in India, where the drug was developed and tested and is now manufactured. The company also saved money by manufacturing its own viral vectors, one of the most expensive line items in the manufacturing process.

Another way to curb costs is to produce the therapies in the medical centers where they’re delivered. Although cancer centers are in charge of collecting T cells from their patients, they typically don’t produce the CAR-T therapies themselves. Instead they ship the cells to pharma companies, which have specialized facilities for engineering and growing the cells. Then the company ships the therapy back. But producing these therapies in house—a model called point-of-care manufacturing—could save money and reduce wait times. One hospital in Barcelona made and tested its own CAR-T therapy and now provides it to patients for $97,000, a fraction of what the name-brand medicines cost.

In Brazil, the Oswaldo Cruz Foundation, a vaccine manufacturer and the largest biomedical research institute in Latin America, recently partnered with a US-based nonprofit called Caring Cross to help develop local CAR-T manufacturing capabilities. Caring Cross has developed a point-of-care manufacturing process able to generate CAR-T therapies for an even lower cost—roughly $20,000 in materials and $10,000 in labor and facilities.

It’s an attractive model. Demand for CAR-T often outstrips supply, leading to long wait times. “There is a growing tension around the limited access that we’re seeing for cell and gene therapies coming out of biotech,” Stanford pediatric oncologist Crystal Mackall told Stat. “It’s incredibly tempting to say, ‘Well, why don’t you just let me make it for my patients?’”

Even these treatments run in the tens of thousands of dollars, partly because approved CAR-T products are bespoke therapies, each one produced for a particular patient. But many companies are also working on off-the-shelf CAR-T therapies. In some cases, that means engineering T cells from healthy donors. Some of those therapies are already in clinical trials. 

In other cases, companies are working to engineer cells inside the body. That process should make it much, much simpler and cheaper to deliver CAR-T. With conventional CAR-T therapies, patients have to undergo chemotherapy to destroy their existing T cells. But with in vivo CAR-T, this step isn’t necessary. And because these therapies don’t require any cell manipulation outside the patient’s body, “you could take it in an outpatient clinic,” says Priya Karmali, chief technology officer at Capstan Therapeutics, which is developing in vivo CAR-T therapies. “You wouldn’t need specialized centers.”

Some in vivo strategies, just like the ex vivo strategies, rely on viral vectors. Umoja Biopharma’s platform uses a viral vector but also employs a second technology to prompt the engineered cells to survive and expand in the presence of the drug rapamycin. Last fall, the company reported that it had successfully generated in vivo CAR-T cells in nonhuman primates.

At Capstan Therapeutics, researchers are taking a different tack, using lipid nanoparticles to ferry mRNA into T cells. When a viral vector places the CAR gene into a cell’s DNA, the change is permanent. But with mRNA, the CAR operates for only a limited time. “Once the war is over, you don’t want the soldiers lurking around forever,” Karmali says.

And with CAR-T, there are plenty of potential battlefields to conquer. CAR-T therapies are already showing promise beyond blood cancers. Earlier this year, researchers reported stunning results in 15 patients with lupus and other autoimmune diseases. CAR-T is also being tested as a treatment for solid tumors, heart disease, aging, HIV infection, and more. As the number of people eligible for CAR-T therapies increases, so will the pressure to reduce the cost.


Now read the rest of The Checkup

Read more from MIT Technology Review’s archive

Scientists are finally making headway in moving CAR-T into solid tumors. Last fall I wrote about the barriers and the progress.

In the early days of CAR-T, Emily Mullin reported on patient deaths that called the safety of the treatment into question. 

Travel back in time to relive the excitement over the approval of the first CAR-T therapy with this story by Emily Mullin. 

From around the web

The Arizona Supreme Court ruled that an 1864 law banning nearly all abortions can be enforced after a 14-day grace period. (NBC)

Drug shortages are worse than they have been in more than two decades. Pain meds, chemo drugs, and ADHD medicines are all in short supply. Here’s why. (Stat)

England became the fifth European country to begin limiting children’s access to gender treatments such as puberty blockers and hormone therapy. Proponents of the restrictions say there is little evidence that these therapies help young people with gender dysphoria. (NYT $)

Last week I wrote about an outbreak of bird flu in cows. A new study finds that birds in New York City are also carrying the virus. The researchers found H5N1 in geese in the Bronx, a chicken in Manhattan, a red-tailed hawk in Queens, and a goose and a peregrine falcon in Brooklyn. (NYT)

A brief, weird history of brainwashing

On an early spring day in 1959, Edward Hunter testified before a US Senate subcommittee investigating “the effect of Red China Communes on the United States.” It was the kind of opportunity he relished. A war correspondent who had spent considerable time in Asia, Hunter had achieved brief media stardom in 1951 after his book Brain-Washing in Red China introduced a new concept to the American public: a supposedly scientific system for changing people’s minds, even making them love things they once hated. 

But Hunter wasn’t just a reporter, objectively chronicling conditions in China. As he told the assembled senators, he was also an anticommunist activist who served as a propagandist for the OSS, or Office of Strategic Services—something that was considered normal and patriotic at the time. His reporting blurred the line between fact and political mythology.

portrait of Liang Qichao
Chinese reformists like Liang Qichao used the term xinao—a play on an older word, xixin, or “washing the heart”—in an attempt to bring ideas from Western science into Chinese philosophy.
WIKIMEDIA COMMONS

When a senator asked about Hunter’s work for the OSS, the operative boasted that he was the first to “discover the technique of mind-attack” in mainland China, the first to use the word “brainwashing” in writing in any language, and “the first, except for the Chinese, to use the word in speech in any language.” 

None of this was true. Other operatives associated with the OSS had used the word in reports before Hunter published articles about it. More important, as the University of Hong Kong legal scholar Ryan Mitchell has pointed out, the Chinese word Hunter used at the hearing—xinao (洗脑), translated as “wash brain”—has a long history going back to scientifically minded Chinese philosophers of the late 19th century, who used it to mean something more akin to enlightenment. 

Yet Hunter’s sensational tales still became an important part of the disinformation and pseudoscience that fueled a “mind-control race” during the Cold War, much like the space race. Inspired by new studies on brain function, the US military and intelligence communities prepared themselves for a psychic war with the Soviet Union and China by spending millions of dollars on research into manipulating the human brain. But while the science never exactly panned out, residual beliefs fostered by this bizarre conflict continue to play a role in ideological and scientific debates to this day.

Coercive persuasion and pseudoscience

Ironically, “brainwashing” was not a widely used term among communists in China. The word xinao, Mitchell told me in an email, is actually a play on an older word, xixin, or washing the heart, which alludes to a Confucian and Buddhist ideal of self-awareness. In the late 1800s, Chinese reformists such as Liang Qichao began using xinao—replacing the character for “heart” with “brain”—in part because they were trying to modernize Chinese philosophy. “They were eager to receive and internalize as much as they could of Western science in general, and discourse about the brain as the seat of consciousness was just one aspect of that set of imported ideas,” Mitchell said. 

For Liang and his circle, brainwashing wasn’t some kind of mind-wiping process. “It was a sort of notion of epistemic virtue,” Mitchell said, “or a personal duty to make oneself modern in order to behave properly in the modern world.”

Meanwhile, scientists outside China were investigating “brainwashing” in the sense we usually think of, with experiments into mind clearing and reprogramming. Some of the earliest research into the possibility began in the 1890s, when Ivan Pavlov, the Russian physiologist who had famously conditioned dogs to drool at the sound of a bell, worked on Soviet-funded projects to investigate how trauma could change animal behavior. He found that even the most well-conditioned dogs would forget their training after intensely stressful experiences such as nearly drowning, especially when those were combined with sleep deprivation and isolation. It seemed that Pavlov had hit upon a quick way to wipe animals’ memories. Scientists on both sides of the Iron Curtain subsequently wondered whether it might work on humans. And once memories were wiped, they wondered, could something else be installed in their place? 

During the 1949 show trial of the Hungarian anticommunist József Mindszenty, American officials worried that the Russians might have found the answer. A Catholic cardinal, Mindszenty had protested several government policies of the newly formed, Soviet-backed Hungarian People’s Republic. He was arrested and tortured, and he eventually made a series of outlandish confessions at trial: that he had conspired to steal the Hungarian crown jewels, start World War III, and make himself ruler of the world. In his book Dark Persuasion, Joel Dimsdale, a psychiatry professor at the University of California, San Diego, argues that the US intelligence community saw these implausible claims as confirmation that the Soviets had made some kind of scientific breakthrough that allowed them to control the human mind through coercive persuasion.

This question became more urgent when, in 1953, a handful of American POWs in China and Korea switched sides, and a Marine named Frank Schwable was quoted on Chinese radio validating the communist claim that the US was testing germ warfare in Asia. By this time, Hunter had already published a book about brainwashing in China, so the Western public quickly gravitated toward his explanation that the prisoners had been brainwashed, just like Mindszenty. People were terrified, and this was a reassuring explanation for how nice American GIs could go Red. 

cover of "Brainwashing: The true and terrible story of the men who endured and defied  the most diabolical red torture." by Edward Hunter
Edward Hunter, who claimed to have coined the term “brainwashing,” wrote a book that fueled paranoia about a “mind-control race” during the Cold War.
a pamphlet cover of "Brain-Washing: A Synthesis of the Russian Textbook on Psychopolitics"
A pamphlet published in 1955, purported to be a translation of a work by the Russian secret police, claimed that the Soviets used drugs and psychology to control the masses and that Dianetics, a pseudoscience invented by Scientology founder L. Ron Hubbard, could prevent brainwashing.

Over the following years, in the wake of the Korean War, “brainwashing” grew into a catchall explanation for any kind of radical or nonconformist behavior in the United States. Social scientists and politicians alike latched onto the idea. The Dutch psychologist Joost Meerloo warned that television was a brainwashing machine, for example, and the anticommunist educator J. Merrill Root claimed that high schools brainwashed kids into being weak-willed and vulnerable to communist influence. Meanwhile, popular movies like 1962’s The Manchurian Candidate, starring Frank Sinatra, offered thrilling tales of Chinese communists whose advanced psychological techniques turned unsuspecting American POWs into assassins. 

For the military and intelligence communities, mind control hovered between myth and science. Nowhere is this more obvious than in the peculiar case of an anonymously published 1955 pamphlet called Brain-Washing: A Synthesis of the Russian Textbook on Psychopolitics, which purported to be a translation of work by the Soviet secret-police chief Lavrentiy Beria. Full of wild claims about how the Soviets used psychology and drugs to control the masses, the pamphlet has a peculiar section devoted to the ways that Dianetics—a pseudoscience invented by the founder of Scientology, L. Ron Hubbard—could prevent brainwashing. As a result, it is widely believed that Hubbard himself wrote the pamphlet as black propaganda, or propaganda that masquerades as something produced by a foreign adversary. 

The 1962 film The Manchurian Candidate, starring Frank Sinatra, offered thrilling tales of Chinese communists whose advanced psychological techniques turned unsuspecting American POWs into assassins.
ALAMY

Still, US officials apparently took it seriously. David Seed, a cultural studies scholar at the University of Liverpool, plumbed the National Security Council papers at the Dwight D. Eisenhower Library, where he discovered that the NSC’s Operations Coordinating Board had analyzed the pamphlet as part of an investigation into enemy capabilities. A member of the board wrote that it might be “fake” but contained so much accurate information that it was clearly written by “experts.” When it came to brainwashing, government operatives made almost no distinction between black propaganda and so-called expertise.

This gobbledygook may also have struck the NSC investigator as legitimate because Hubbard borrowed lingo from the same sources as many scientists of the era. Hubbard chose the name Dianetics, for instance, specifically to evoke the computer scientist Norbert Wiener’s idea of cybernetics, an influential theory about information control systems that heavily informed both psychology and the burgeoning field of artificial intelligence. Cybernetics suggested that the brain functioned like a machine, with inputs and outputs, feedback and control. And if machines could be optimized, then why not brains?

An excuse for government abuse 

The fantasy of brainwashing was always one of optimization. Military experts knew that adversaries could be broken with torture, but it took months and was often a violent, messy process. A fast, scientifically informed interrogation method would save time and could potentially be deployed on a mass scale. In 1953, that dream led the CIA to invest millions of dollars in MK-Ultra, a project that injected cash into university and research programs devoted to memory wiping, mind control, and “truth serum” drugs. Worried that their rivals in the Soviet Union and China were controlling people’s minds to spread communism throughout the world, the intelligence community was willing to try almost anything to fight back. No operation was too weird. 

One of MK-Ultra’s most notorious projects was “Operation Midnight Climax” in San Francisco, where sex workers lured random American men to a safe house and dosed them with LSD while CIA agents covertly observed their behavior. At McGill University in Montreal, the CIA funded the work of the psychologist Donald Cameron, who used a combination of drugs and electroconvulsive therapy on patients with mental illness, attempting to erase and “repattern” their minds. Though many of his victims did wind up suffering from amnesia for years, Cameron never successfully injected new thoughts or memories. Marcia Holmes, a science historian who researched brainwashing for the Hidden Persuaders project at Birkbeck, University of London, told me that the CIA used Cameron’s data to develop new kinds of torture, which the US adopted as “enhanced interrogation” techniques in the wake of 9/11. “You could put a scientific spin on it and claim that’s why it worked,” she said. “But it always boiled down to medieval tactics that people knew from experience worked.”

Schwable
Believed to be a victim of communist mind control, the American POW Frank Schwable claimed on Chinese radio in 1953 that the US was testing germ warfare in Asia.
József Mindszenty
After being arrested and tortured, the Catholic cardinal and anticommunist József Mindszenty made outlandish confessions at trial, like that he had conspired to steal the Hungarian crown jewels.

MK-Ultra remained secret until the mid-1970s, when the US Senate Select Committee to Study Governmental Operations with Respect to Intelligence Activities, commonly known as the Church Committee after its chair, Senator Frank Church, opened hearings into the long-running project. The shocking revelations that the CIA was drugging American citizens and paying for the torment of vulnerable Canadians changed the public’s understanding of mind control. “Brainwashing” came to seem less like a legitimate threat from overseas enemies and more like a ruse or excuse for almost any kind of bad behavior. When Patty Hearst, granddaughter of the newspaper publisher William Randolph Hearst, was put on trial in 1976 for robbing a bank after being kidnapped by the Symbionese Liberation Army, an American militant organization, the judge refused to believe experts who testified that she had been tortured and brainwashed by her captors. She was convicted and spent 22 months in jail. This marked the end of the nation’s infatuation with brainwashing, and experts began to debunk the idea that there was a scientific basis for mind control.

Patty Hearst against a red flag
At publishing heiress Patty Hearst’s 1976 trial for bank robbery, the judge refused to believe that she had been brainwashed as a victim of kidnapping.
GIFT OF TIME MAGAZINE

Still, the revelations about MK-Ultra led to new cultural myths. Communists were no longer the baddies—instead, people feared that the US government was trying to experiment on its citizens. Soon after the Church Committee hearings were over, the media was gripped by a crime story of epic proportions: nearly two dozen Black children had been murdered in Atlanta, and the police had no leads other than a vague idea that maybe it could be a serial killer. Wayne Williams, a Black man who was eventually convicted of two of the murders, claimed at various points that he had been trained by the CIA. This led to popular conspiracy theories that MK-Ultra had been experimenting on Black people in Atlanta.

Colin Dickey, author of Under the Eye of Power: How Fear of Secret Societies Shapes American Democracy, told me these conspiracy theories became “a way of making sense of an otherwise mystifying and terrifying reality, [which is that America is] a country where Black people are so disenfranchised that their murders aren’t noticed.” Dickey added that this MK-Ultra conspiracy theory “gave a shape to systemic racism,” placing blame for the Atlanta child murders on the US government. In the process, it also suggested that Black people had been brainwashed to kill each other. 

No evidence ever surfaced that MK-Ultra was behind the children’s deaths, but the idea of brainwashing continues to be a powerful metaphor for the effects of systemic racism. It haunts contemporary Black horror films like Get Out, where white people take over Black people’s bodies through a fantastical version of hypnosis. And it provides the analytical substrate for the scathing indictment of racist marketing in the book Brainwashed: Challenging the Myth of Black Inferiority, by the Black advertising executive Tom Burrell. He argues that advertising has systematically pushed stereotypes of Black people as second-class citizens, instilling a “slave mindset” in Black audiences.

A social and political phenomenon

Today, even as the idea of brainwashing is often dismissed as pseudoscience, Americans are still spellbound by the idea that people we disagree with have been psychologically captured by our enemies. Right-wing pundits and politicians often attribute discussions of racism to infections by a “woke mind virus”—an idea that is a direct descendant of Cold War panics over communist brainwashing. Meanwhile, contemporary psychology researchers like UCSD’s Dimsdale fear that social media is now a vector for coercive persuasion, just as Meerloo worried about television’s mind-control powers in the 1950s. 

Cutting-edge technology is also altering how we think about mind control. In a 2017 open letter published in Nature, an international group of researchers and ethicists warned that neurotechnologies like brain-computer interfaces “mean that we are on a path to a world in which it will be possible to decode people’s mental processes and directly manipulate the brain mechanisms underlying their intentions, emotions and decisions.” It sounds like MK-Ultra’s wish list. Hoping to head off a neuro-dystopia, the group outlined several key ways that companies and universities could guard against coercive uses of this technology in the future. They suggested that we need laws to prevent companies from spying on people’s private thoughts, for example, as well as regulations that bar anyone from using brain implants to change people’s personalities or make them more neurotypical. 

Many neuroscientists feel that these concerns are overblown; one of them, the University of Maryland cognitive scientist R. Douglas Fields, summed up the naysayers’ position with a column in Quanta magazine arguing that the brain is more plastic than we realize, and that neurotech mind control will never be as simple as throwing a switch. Kathleen Taylor, another neuroscientist who studies brainwashing, takes a more measured view; in her book Brainwashing: The Science of Thought Control, she acknowledges that neurotech and drugs could change people’s thought processes but ultimately concludes that “brainwashing is above all a social and political phenomenon.” 

Sidney Gottlieb
Sidney Gottlieb was an American chemist and spymaster who in the 1950s headed the Central Intelligence Agency’s mind-control program known as Project MK-Ultra.
COURTESY OF THE CIA

Perhaps that means the anonymous National Security Council examiner was right to call Hubbard’s black propaganda the work of an “expert.” If brainwashing is politics, then disinformation might be as effective (or ineffective) as a brain implant in changing someone’s mind. Still, scholars have learned that political efforts at mind control do not have predictable results. Online disinformation leads to what Juliette Kayyem, a former assistant secretary of the Department of Homeland Security, identifies as stochastic terrorism, or acts of violence that cannot be predicted precisely but can be analyzed statistically. She writes that stochastic terrorism is inspired by online rhetoric that demonizes groups of people, but it’s hard to know which people consuming that rhetoric will actually become terrorists, and which of them will just rage at their computer screens—the result of coercive persuasion that works on some targets and misses others. 

American operatives may never have found the perfect system for brainwashing foreign adversaries or unsuspecting citizens, but the US managed to win the mind-control wars in one small way. Mitchell, the legal scholar at Hong Kong University, told me that the American definition of brainwashing, or xinao, is now the dominant way the word is used in modern Chinese speech. “People refer to aggressive advertising campaigns or earworm pop songs as having a xinao effect,” he said. The Chinese government, Mitchell added, uses the term exactly the way the US military did back in the 1950s. State media, for example, “described many Hong Kong protesters in 2019 as having undergone xinao by the West.”

Annalee Newitz is the author of Stories Are Weapons: Psychological Warfare and the American Mind, coming in June 2024.

This US startup makes a crucial chip material and is taking on a Japanese giant

It can be dizzying to try to understand all the complex components of a single computer chip: layers of microscopic components linked to one another through highways of copper wires, some barely wider than a few strands of DNA. Nestled between those wires is an insulating material called a dielectric, ensuring that the wires don’t touch and short out. Zooming in further, there’s one particular dielectric placed between the chip and the structure beneath it; this material, called dielectric film, is produced in sheets as thin as white blood cells. 

For 30 years, a single Japanese company called Ajinomoto has made billions producing this particular film. Competitors have struggled to outdo them, and today Ajinomoto has more than 90% of the market in the product, which is used in everything from laptops to data centers. 

But now, a startup based in Berkeley, California, is embarking on a herculean effort to dethrone Ajinomoto and bring this small slice of the chipmaking supply chain back to the US.

Thintronics is promising a product purpose-built for the computing demands of the AI era—a suite of new materials that the company claims have higher insulating properties and, if adopted, could mean data centers with faster computing speeds and lower energy costs. 

The company is at the forefront of a coming wave of new US-based companies, spurred by the $280 billion CHIPS and Science Act, that is seeking to carve out a portion of the semiconductor sector, which has become dominated by just a handful of international players. But to succeed, Thintronics and its peers will have to overcome a web of challenges—solving technical problems, disrupting long-standing industry relationships, and persuading global semiconductor titans to accommodate new suppliers. 

“Inventing new materials platforms and getting them into the world is very difficult,” Thintronics founder and CEO Stefan Pastine says. It is “not for the faint of heart.”

The insulator bottleneck

If you recognize the name Ajinomoto, you’re probably surprised to hear it plays a critical role in the chip sector: the company is better known as the world’s leading supplier of MSG seasoning powder. In the 1990s, Ajinomoto discovered that a by-product of MSG made a great insulator, and it has enjoyed a near monopoly in the niche material ever since. 

But Ajinomoto doesn’t make any of the other parts that go into chips. In fact, the insulating materials in chips rely on dispersed supply chains: one layer uses materials from Ajinomoto, another uses material from another company, and so on, with none of the layers optimized to work in tandem. The resulting system works okay when data is being transmitted over short paths, but over longer distances, like between chips, weak insulators act as a bottleneck, wasting energy and slowing down computing speeds. That’s recently become a growing concern, especially as the scale of AI training gets more expensive and consumes eye-popping amounts of energy. (Ajinomoto did not respond to requests for comment.) 

None of this made much sense to Pastine, a chemist who sold his previous company, which specialized in recycling hard plastics, to an industrial chemicals company in 2019. Around that time, he started to believe that the chemicals industry could be slow to innovate, and he thought the same pattern was keeping chipmakers from finding better insulating materials. In the chip industry, he says, insulators have “kind of been looked at as the redheaded stepchild”—they haven’t seen the progress made with transistors and other chip components. 

He launched Thintronics that same year, with the hope that cracking the code on a better insulator could provide data centers with faster computing speeds at lower costs. That idea wasn’t groundbreaking—new insulators are constantly being researched and deployed—but Pastine believed that he could find the right chemistry to deliver a breakthrough. 

Thintronics says it will manufacture different insulators for all layers of the chip, for a system designed to swap into existing manufacturing lines. Pastine tells me the materials are now being tested with a number of industry players. But he declined to provide names, citing nondisclosure agreements, and similarly would not share details of the formula. 

Without more details, it’s hard to say exactly how well the Thintronics materials compare with competing products. The company recently tested its materials’ Dk values, a measure of how effective an insulator a material is. Venky Sundaram, a researcher who has founded multiple semiconductor startups but is not involved with Thintronics, reviewed the results. When compared with other build-up films—the dielectric category in which Thintronics is competing—the company’s most impressive Dk values are better than those of any other material available today, he says.

A rocky road ahead

Thintronics’ vision has already garnered some support. The company received a $20 million Series A funding round in March, led by venture capital firms Translink and Maverick, as well as a grant from the US National Science Foundation. 

The company is also seeking funding from the CHIPS Act. Signed into law by President Joe Biden in 2022, it’s designed to boost companies like Thintronics in order to bring semiconductor manufacturing back to American companies and reduce reliance on foreign suppliers. A year after it became law, the administration said that more than 450 companies had submitted statements of interest to receive CHIPS funding for work across the sector. 

The bulk of funding from the legislation is destined for large-scale manufacturing facilities, like those operated by Intel in New Mexico and Taiwan Semiconductor Manufacturing Company (TSMC) in Arizona. But US Secretary of Commerce Gina Raimondo has said she’d like to see smaller companies receive funding as well, especially in the materials space. In February, applications opened for a pool of $300 million earmarked specifically for materials innovation. While Thintronics declined to say how much funding it was seeking or from which programs, the company does see the CHIPS Act as a major tailwind.

But building a domestic supply chain for chips—a product that currently depends on dozens of companies around the globe—will mean reversing decades of specialization by different countries. And industry experts say it will be difficult to challenge today’s dominant insulator suppliers, who have often had to adapt to fend off new competition. 

“Ajinomoto has been a 90-plus-percent-market-share material for more than two decades,” says Sundaram. “This is unheard-of in most businesses, and you can imagine they didn’t get there by not changing.”

One big challenge is that the dominant manufacturers have decades-long relationships with chip designers like Nvidia or Advanced Micro Devices, and with manufacturers like TSMC. Asking these players to swap out materials is a big deal.

“The semiconductor industry is very conservative,” says Larry Zhao, a semiconductor researcher who has worked in the dielectrics industry for more than 25 years. “They like to use the vendors they already know very well, where they know the quality.” 

Another obstacle facing Thintronics is technical: insulating materials, like other chip components, are held to manufacturing standards so precise they are difficult to comprehend. The layers where Ajinomoto dominates are thinner than a human hair. The material must also be able to accept tiny holes, which house wires running vertically through the film. Every new iteration is a massive R&D effort in which incumbent companies have the upper hand given their years of experience, says Sundaram.

If all this is completed successfully in a lab, yet another hurdle lies ahead: the material has to retain those properties in a high-volume manufacturing facility, which is where Sundaram has seen past efforts fail.

“I have advised several material suppliers over the years that tried to break into [Ajinomoto’s] business and couldn’t succeed,” he says. “They all ended up having the problem of not being as easy to use in a high-volume production line.” 

Despite all these challenges, one thing may be working in Thintronics’ favor: US-based tech giants like Microsoft and Meta are making headway in designing their own chips for the first time. The plan is to use these chips for in-house AI training as well as for the cloud computing capacity that they rent out to customers, both of which would reduce the industry’s reliance on Nvidia. 

Though Microsoft, Google, and Meta declined to comment on whether they are pursuing advancements in materials like insulators, Sundaram says these firms could be more willing to work with new US startups rather than defaulting to the old ways of making chips: “They have a lot more of an open mind about supply chains than the existing big guys.”

This story was updated on April 12 to clarify how the Dk values of Thintronics’ materials compare to those of other build-up films.

The Download: AI is making robots more helpful, and the problem with cleaning up pollution

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

Is robotics about to have its own ChatGPT moment?

Henry and Jane Evans are used to awkward houseguests. For more than a decade, the couple, who live in Los Altos Hills, California, have hosted a slew of robots in their home.

In 2002, at age 40, Henry had a massive stroke, which left him with quadriplegia and an inability to speak. While the couple have experimented with many advanced robotic prototypes in a bid to give Henry more autonomy, one recent model that works in tandem with AI models has made the biggest difference—helping to brush his hair, and opening up his relationship with his granddaughter.

A new generation of scientists and inventors believes that the previously missing ingredient of AI can give robots the ability to learn new skills and adapt to new environments faster than ever before. This new approach, just maybe, can finally bring robots out of the factory and into our homes. Read the full story.

—Melissa Heikkilä

Melissa’s story is from the next magazine issue of MIT Technology Review, set to go live on April 24, on the theme of Build. If you don’t subscribe already, sign up now to get a copy when it lands.

The inadvertent geoengineering experiment that the world is now shutting off

The news: When we talk about climate change, the focus is usually on the role that greenhouse-gas emissions play in driving up global temperatures, and rightly so. But another important, less-known phenomenon is also heating up the planet: reductions in other types of pollution.

In a nutshell: In particular, the world’s power plants, factories, and ships are pumping much less sulfur dioxide into the air, thanks to an increasingly strict set of global pollution regulations. Sulfur dioxide creates aerosol particles in the atmosphere that can directly reflect sunlight back into space or act as the “condensation nuclei” around which cloud droplets form. More or thicker clouds, in turn, also cast away more sunlight. So when we clean up pollution, we also ease this cooling effect.  

Why it matters: Cutting air pollution has unequivocally saved lives. But as the world rapidly warms, it’s critical to understand the impact of pollution-fighting regulations on the global thermostat as well. Read the full story.

—James Temple

This story is from The Spark, our weekly climate and energy newsletter. Sign up to receive it in your inbox every Wednesday.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 Election workers are worried about AI 
Generative models could make it easier for election deniers to spam offices. (Wired $)
+ Eric Schmidt has a 6-point plan for fighting election misinformation. (MIT Technology Review)

2 Apple has warned users in 92 countries of mercenary spyware attacks
It said it had high confidence that the targets were at genuine risk. (TechCrunch)

3 The US is in desperate need of chip engineers
Without them, it can’t meet its lofty semiconductor production goals. (WSJ $)
+ Taiwanese chipmakers are looking to expand overseas. (FT $)
+ How ASML took over the chipmaking chessboard. (MIT Technology Review)

4 Meet the chatbot tutors
Tens of thousands of gig economy workers are training tomorrow’s models. (NYT $)
+ Adobe is paying photographers $120 per video to train its generator. (Bloomberg $)
+ The next wave of AI coding tools is emerging. (IEEE Spectrum)
+ The people paid to train AI are outsourcing their work… to AI. (MIT Technology Review)

5 The Middle East is rushing to build AI infrastructure
Both Saudi Arabia and the UAE see sprawling data centers as key to becoming the region’s AI superpower. (Bloomberg $)

6 Political content creators and activists are lobbying Meta
They claim the company’s decision to limit the reach of ‘political’ content is threatening their livelihoods. (WP $)

7 The European Space Agency is planning an artificial solar eclipse
The mission, due to launch later this year, should provide essential insight into the sun’s atmosphere. (IEEE Spectrum)

8 How AI is helping to recover Ireland’s marginalized voices
Starting with the dung queen of Dublin. (The Guardian)
+ How AI is helping historians better understand our past. (MIT Technology Review)

9 Video game history is vanishing before our eyes
As consoles fall out of use, their games are consigned to history too. (FT $)

10 Dating apps are struggling to make looking for love fun
Charging users seems counterintuitive, then. (The Atlantic $)
+ Here’s how the net’s newest matchmakers help you find love. (MIT Technology Review)

Quote of the day

“We’re women sharing cool things with each other directly. You want it to go back to men running QVC?”

—Micah Enriquez, a successful ‘cleanfluencer’ who shares cleaning tips and processes with her followers, tells New York Magazine that she feels criticism leveled at such content creators has a sexist element.

The big story

Is it possible to really understand someone else’s mind?

November 2023

Technically speaking, neuroscientists have been able to read your mind for decades. It’s not easy, mind you. First, you must lie motionless within a hulking fMRI scanner, perhaps for hours, while you watch films or listen to audiobooks.

None of this, of course, can be done without your consent; for the foreseeable future, your thoughts will remain your own, if you so choose. But if you do elect to endure claustrophobic hours in the scanner, the software will learn to generate a bespoke reconstruction of what you were seeing or listening to, just by analyzing how blood moves through your brain.

More recently, researchers have deployed generative AI tools, like Stable Diffusion and GPT, to create far more realistic, if not entirely accurate, reconstructions of films and podcasts based on neural activity.

But as exciting as the idea of extracting a movie from someone’s brain activity may be, it is a highly limited form of “mind reading.” To really experience the world through your eyes, scientists would have to be able to infer not just what film you are watching but also what you think about it, and how it makes you feel. And these interior thoughts and feelings are far more difficult to access. Read the full story.

—Grace Huckins

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or tweet ’em at me.)

+ Intrepid archaeologists have uncovered beautiful new frescos in the ruins of Pompeii.
+ This doughy jellyfish sure looks tasty.
+ A short rumination on literary muses, from Zelda Fitzgerald to Neal Cassady.
+ Grammar rules are made to be broken.

Is robotics about to have its own ChatGPT moment?

Silent. Rigid. Clumsy.

Henry and Jane Evans are used to awkward houseguests. For more than a decade, the couple, who live in Los Altos Hills, California, have hosted a slew of robots in their home. 

In 2002, at age 40, Henry had a massive stroke, which left him with quadriplegia and an inability to speak. Since then, he’s learned how to communicate by moving his eyes over a letter board, but he is highly reliant on caregivers and his wife, Jane. 

Henry got a glimmer of a different kind of life when he saw Charlie Kemp on CNN in 2010. Kemp, a robotics professor at Georgia Tech, was on TV talking about PR2, a robot developed by the company Willow Garage. PR2 was a massive two-armed machine on wheels that looked like a crude metal butler. Kemp was demonstrating how the robot worked, and talking about his research on how health-care robots could help people. He showed how the PR2 robot could hand some medicine to the television host.    

“All of a sudden, Henry turns to me and says, ‘Why can’t that robot be an extension of my body?’ And I said, ‘Why not?’” Jane says. 

There was a solid reason why not. While engineers have made great progress in getting robots to work in tightly controlled environments like labs and factories, the home has proved difficult to design for. Out in the real, messy world, furniture and floor plans differ wildly; children and pets can jump in a robot’s way; and clothes that need folding come in different shapes, colors, and sizes. Managing such unpredictable settings and varied conditions has been beyond the capabilities of even the most advanced robot prototypes. 

That seems to finally be changing, in large part thanks to artificial intelligence. For decades, roboticists have more or less focused on controlling robots’ “bodies”—their arms, legs, levers, wheels, and the like—via purpose-driven software. But a new generation of scientists and inventors believes that the previously missing ingredient of AI can give robots the ability to learn new skills and adapt to new environments faster than ever before. This new approach, just maybe, can finally bring robots out of the factory and into our homes. 

Progress won’t happen overnight, though, as the Evanses know far too well from their many years of using various robot prototypes. 

PR2 was the first robot they brought in, and it opened entirely new skills for Henry. It would hold a beard shaver and Henry would move his face against it, allowing him to shave and scratch an itch by himself for the first time in a decade. But at 450 pounds (200 kilograms) or so and $400,000, the robot was difficult to have around. “It could easily take out a wall in your house,” Jane says. “I wasn’t a big fan.”

More recently, the Evanses have been testing out a smaller robot called Stretch, which Kemp developed through his startup Hello Robot. The first iteration launched during the pandemic with a much more reasonable price tag of around $18,000. 

Stretch weighs about 50 pounds. It has a small mobile base, a stick with a camera dangling off it, and an adjustable arm featuring a gripper with suction cups at the ends. It can be controlled with a console controller. Henry controls Stretch using a laptop, with a tool that tracks his head movements to move a cursor around. He is able to move his thumb and index finger enough to click a computer mouse. Last summer, Stretch was with the couple for more than a month, and Henry says it gave him a whole new level of autonomy. “It was practical, and I could see using it every day,” he says. 

Henry Evans used the Stretch robot to brush his hair, eat, and even play with his granddaughter.
PETER ADAMS

Using his laptop, he could get the robot to brush his hair and have it hold fruit kebabs for him to snack on. It also opened up Henry’s relationship with his granddaughter Teddie. Before, they barely interacted. “She didn’t hug him at all goodbye. Nothing like that,” Jane says. But “Papa Wheelie” and Teddie used Stretch to play, engaging in relay races, bowling, and magnetic fishing. 

Stretch doesn’t have much in the way of smarts: it comes with some preinstalled software, such as the web interface that Henry uses to control it, and other capabilities such as AI-enabled navigation. The main benefit of Stretch is that people can plug in their own AI models and use them to do experiments. But it offers a glimpse of what a world with useful home robots could look like. Robots that can do many of the things humans do in the home—tasks such as folding laundry, cooking meals, and cleaning—have been a dream of robotics research since the inception of the field in the 1950s. For a long time, it’s been just that: “Robotics is full of dreamers,” says Kemp.

But the field is at an inflection point, says Ken Goldberg, a robotics professor at the University of California, Berkeley. Previous efforts to build a useful home robot, he says, have emphatically failed to meet the expectations set by popular culture—think the robotic maid from The Jetsons. Now things are very different. Thanks to cheap hardware like Stretch, along with efforts to collect and share data and advances in generative AI, robots are getting more competent and helpful faster than ever before. “We’re at a point where we’re very close to getting capability that is really going to be useful,” Goldberg says. 

Folding laundry, cooking shrimp, wiping surfaces, unloading shopping baskets—today’s AI-powered robots are learning to do tasks that for their predecessors would have been extremely difficult. 

Missing pieces

There’s a well-known observation among roboticists: What is hard for humans is easy for machines, and what is easy for humans is hard for machines. Called Moravec’s paradox, it was first articulated in the 1980s by Hans Moravec, then a roboticist at the Robotics Institute of Carnegie Mellon University. A robot can play chess or hold an object still for hours on end with no problem. Tying a shoelace, catching a ball, or having a conversation is another matter. 

There are three reasons for this, says Goldberg. First, robots lack precise control and coordination. Second, their understanding of the surrounding world is limited because they are reliant on cameras and sensors to perceive it. Third, they lack an innate sense of practical physics. 

“Pick up a hammer, and it will probably fall out of your gripper, unless you grab it near the heavy part. But you don’t know that if you just look at it, unless you know how hammers work,” Goldberg says. 

On top of these basic considerations, there are many other technical things that need to be just right, from motors to cameras to Wi-Fi connections, and hardware can be prohibitively expensive. 

Mechanically, we’ve been able to do fairly complex things for a while. In a video from 1957, two large robotic arms are dexterous enough to pinch a cigarette, place it in the mouth of a woman at a typewriter, and reapply her lipstick. But the intelligence and the spatial awareness of that robot came from the person who was operating it. 

""
In a video from 1957, a man operates two large robotic arms and uses the machine to apply a woman’s lipstick. Robots have come a long way since.
“LIGHTER SIDE OF THE NEWS –ATOMIC ROBOT A HANDY GUY” (1957) VIA YOUTUBE

“The missing piece is: How do we get software to do [these things] automatically?” says Deepak Pathak, an assistant professor of computer science at Carnegie Mellon.  

Researchers training robots have traditionally approached this problem by planning everything the robot does in excruciating detail. Robotics giant Boston Dynamics used this approach when it developed its boogying and parkouring humanoid robot Atlas. Cameras and computer vision are used to identify objects and scenes. Researchers then use that data to make models that can be used to predict with extreme precision what will happen if a robot moves a certain way. Using these models, roboticists plan the motions of their machines by writing a very specific list of actions for them to take. The engineers then test these motions in the laboratory many times and tweak them to perfection. 

This approach has its limits. Robots trained like this are strictly choreographed to work in one specific setting. Take them out of the laboratory and into an unfamiliar location, and they are likely to topple over. 

Compared with other fields, such as computer vision, robotics has been in the dark ages, Pathak says. But that might not be the case for much longer, because the field is seeing a big shake-up. Thanks to the AI boom, he says, the focus is now shifting from feats of physical dexterity to building “general-purpose robot brains” in the form of neural networks. Much as the human brain is adaptable and can control different aspects of the human body, these networks can be adapted to work in different robots and different scenarios. Early signs of this work show promising results. 

Robots, meet AI 

For a long time, robotics research was an unforgiving field, plagued by slow progress. At the Robotics Institute at Carnegie Mellon, where Pathak works, he says, “there used to be a saying that if you touch a robot, you add one year to your PhD.” Now, he says, students get exposure to many robots and see results in a matter of weeks.

What separates this new crop of robots is their software. Instead of the traditional painstaking planning and training, roboticists have started using deep learning and neural networks to create systems that learn from their environment on the go and adjust their behavior accordingly. At the same time, new, cheaper hardware, such as off-the-shelf components and robots like Stretch, is making this sort of experimentation more accessible. 

Broadly speaking, there are two popular ways researchers are using AI to train robots. Pathak has been using reinforcement learning, an AI technique that allows systems to improve through trial and error, to get robots to adapt their movements in new environments. This is a technique that Boston Dynamics has also started using in its robot “dogs,” called Spot.

Deepak Pathak’s team at Carnegie Mellon has used an AI technique called reinforcement learning to create a robotic dog that can do extreme parkour with minimal pre-programming.

In 2022, Pathak’s team used this method to create four-legged robot “dogs” capable of scrambling up steps and navigating tricky terrain. The robots were first trained to move around in a general way in a simulator. Then they were set loose in the real world, with a single built-in camera and computer vision software to guide them. Other similar robots rely on tightly prescribed internal maps of the world and cannot navigate beyond them.

Pathak says the team’s approach was inspired by human navigation. Humans receive information about the surrounding world from their eyes, and this helps them instinctively place one foot in front of the other to get around in an appropriate way. Humans don’t typically look down at the ground under their feet when they walk, but a few steps ahead, at a spot where they want to go. Pathak’s team trained its robots to take a similar approach to walking: each one used the camera to look ahead. The robot was then able to memorize what was in front of it for long enough to guide its leg placement. The robots learned about the world in real time, without internal maps, and adjusted their behavior accordingly. At the time, experts told MIT Technology Review the technique was a “breakthrough in robot learning and autonomy” and could allow researchers to build legged robots capable of being deployed in the wild.   

Pathak’s robot dogs have since leveled up. The team’s latest algorithm allows a quadruped robot to do extreme parkour. The robot was again trained to move around in a general way in a simulation. But using reinforcement learning, it was then able to teach itself new skills on the go, such as how to jump long distances, walk on its front legs, and clamber up tall boxes twice its height. These behaviors were not something the researchers programmed. Instead, the robot learned through trial and error and visual input from its front camera. “I didn’t believe it was possible three years ago,” Pathak says. 

In the other popular technique, called imitation learning, models learn to perform tasks by, for example, imitating the actions of a human teleoperating a robot or using a VR headset to collect data on a robot. It’s a technique that has gone in and out of fashion over decades but has recently become more popular with robots that do manipulation tasks, says Russ Tedrake, vice president of robotics research at the Toyota Research Institute and an MIT professor.

By pairing this technique with generative AI, researchers at the Toyota Research Institute, Columbia University, and MIT have been able to quickly teach robots to do many new tasks. They believe they have found a way to extend the technology propelling generative AI from the realm of text, images, and videos into the domain of robot movements. 

The idea is to start with a human, who manually controls the robot to demonstrate behaviors such as whisking eggs or picking up plates. Using a technique called diffusion policy, the robot is then able to use the data fed into it to learn skills. The researchers have taught robots more than 200 skills, such as peeling vegetables and pouring liquids, and say they are working toward teaching 1,000 skills by the end of the year. 

Many others have taken advantage of generative AI as well. Covariant, a robotics startup that spun off from OpenAI’s now-shuttered robotics research unit, has built a multimodal model called RFM-1. It can accept prompts in the form of text, image, video, robot instructions, or measurements. Generative AI allows the robot to both understand instructions and generate images or videos relating to those tasks. 

The Toyota Research Institute team hopes this will one day lead to “large behavior models,” which are analogous to large language models, says Tedrake. “A lot of people think behavior cloning is going to get us to a ChatGPT moment for robotics,” he says. 

In a similar demonstration, earlier this year a team at Stanford managed to use a relatively cheap off-the-shelf robot costing $32,000 to do complex manipulation tasks such as cooking shrimp and cleaning stains. It learned those new skills quickly with AI. 

Called Mobile ALOHA (a loose acronym for “a low-cost open-source hardware teleoperation system”), the robot learned to cook shrimp with the help of just 20 human demonstrations and data from other tasks, such as tearing off a paper towel or piece of tape. The Stanford researchers found that AI can help robots acquire transferable skills: training on one task can improve a robot’s performance on others.

This is all laying the groundwork for robots that can be useful in homes. Human needs change over time, and teaching robots to reliably do a wide range of tasks is important, as it will help them adapt to us. That is also crucial to commercialization—first-generation home robots will come with a hefty price tag, and the robots need to have enough useful skills for regular consumers to want to invest in them. 

For a long time, a lot of the robotics community was very skeptical of these kinds of approaches, says Chelsea Finn, an assistant professor of computer science and electrical engineering at Stanford University and an advisor for the Mobile ALOHA project. Finn says that nearly a decade ago, learning-based approaches were rare at robotics conferences and disparaged in the robotics community. “The [natural-language-processing] boom has been convincing more of the community that this approach is really, really powerful,” she says. 

There is one catch, however. In order to imitate new behaviors, the AI models need plenty of data. 

More is more

Unlike chatbots, which can be trained by using billions of data points hoovered from the internet, robots need data specifically created for robots. They need physical demonstrations of how washing machines and fridges are opened, dishes picked up, or laundry folded, says Lerrel Pinto, an assistant professor of computer science at New York University. Right now that data is very scarce, and it takes a long time for humans to collect.

top frame shows a person recording themself opening a kitchen drawer with a grabber, and the bottom shows a robot attempting the same action
“ON BRINGING ROBOTS HOME,” NUR MUHAMMAD (MAHI) SHAFIULLAH, ET AL.

Some researchers are trying to use existing videos of humans doing things to train robots, hoping the machines will be able to copy the actions without the need for physical demonstrations. 

Pinto’s lab has also developed a neat, cheap data collection approach that connects robotic movements to desired actions. Researchers took a reacher-grabber stick, similar to ones used to pick up trash, and attached an iPhone to it. Human volunteers can use this system to film themselves doing household chores, mimicking the robot’s view of the end of its robotic arm. Using this stand-in for Stretch’s robotic arm and an open-source system called DOBB-E, Pinto’s team was able to get a Stretch robot to learn tasks such as pouring from a cup and opening shower curtains with just 20 minutes of iPhone data.  

But for more complex tasks, robots would need even more data and more demonstrations.  

The requisite scale would be hard to reach with DOBB-E, says Pinto, because you’d basically need to persuade every human on Earth to buy the reacher-grabber system, collect data, and upload it to the internet. 

A new initiative kick-started by Google DeepMind, called the Open X-Embodiment Collaboration, aims to change that. Last year, the company partnered with 34 research labs and about 150 researchers to collect data from 22 different robots, including Hello Robot’s Stretch. The resulting data set, which was published in October 2023, consists of robots demonstrating 527 skills, such as picking, pushing, and moving.  

Sergey Levine, a computer scientist at UC Berkeley who participated in the project, says the goal was to create a “robot internet” by collecting data from labs around the world. This would give researchers access to bigger, more scalable, and more diverse data sets. The deep-learning revolution that led to the generative AI of today started in 2012 with the rise of ImageNet, a vast online data set of images. The Open X-Embodiment Collaboration is an attempt by the robotics community to do something similar for robot data. 

Early signs show that more data is leading to smarter robots. The researchers built two versions of a model for robots, called RT-X, that could be either run locally on individual labs’ computers or accessed via the web. The larger, web-accessible model was pretrained with internet data to develop a “visual common sense,” or a baseline understanding of the world, from the large language and image models. 

When the researchers ran the RT-X model on many different robots, they discovered that the robots were able to learn skills 50% more successfully than with the systems each individual lab had developed on its own.

“I don’t think anybody saw that coming,” says Vincent Vanhoucke, Google DeepMind’s head of robotics. “Suddenly there is a path to basically leveraging all these other sources of data to bring about very intelligent behaviors in robotics.”

Many roboticists think that large vision-language models, which can analyze image and language data together, might give robots important hints about how the surrounding world works, Vanhoucke says. Such models offer semantic clues that could help robots reason, make deductions, and learn by interpreting images. To test this, researchers asked a robot trained on the larger model to point to a picture of Taylor Swift. They had never shown it pictures of Swift, but the robot still identified the pop star because its pretraining gave it a web-scale understanding of who she was, Vanhoucke says.

""
RT-2, a recent model for robotic control, was trained on online text and images as well as interactions with the real world.
KELSEY MCCLELLAN

Vanhoucke says Google DeepMind is increasingly using techniques similar to those it would use for machine translation to translate from English to robotics. Last summer, Google introduced a vision-language-action model called RT-2. This model gets its general understanding of the world from online text and images it has been trained on, as well as its own interactions in the real world. It translates that data into robotic actions. Each robot has a slightly different way of translating English into action, he adds.  

“We increasingly feel like a robot is essentially a chatbot that speaks robotese,” Vanhoucke says. 

Baby steps

Despite the fast pace of development, robots still face many challenges before they can be released into the real world. They are still way too clumsy for regular consumers to justify spending tens of thousands of dollars on them. Robots also still lack the sort of common sense that would allow them to multitask. And they need to move from just picking things up and placing them somewhere to putting things together, says Goldberg—for example, putting a deck of cards or a board game back in its box and then into the games cupboard. 

But to judge from the early results of integrating AI into robots, roboticists are not wasting their time, says Pinto. 

“I feel fairly confident that we will see some semblance of a general-purpose home robot. Now, will it be accessible to the general public? I don’t think so,” he says. “But in terms of raw intelligence, we are already seeing signs right now.” 

Building the next generation of robots might not just assist humans in their everyday chores or help people like Henry Evans live a more independent life. For researchers like Pinto, there is an even bigger goal in sight.

Home robotics offers one of the best benchmarks for human-level machine intelligence, he says. The fact that a human can operate intelligently in the home environment, he adds, means we know this is a level of intelligence that can be reached. 

“It’s something which we can potentially solve. We just don’t know how to solve it,” he says. 

Evans in the foreground with computer screen.  A table with playing cards separates him from two other people in the room
Thanks to Stretch, Henry Evans was able to hold his own playing cards for the first time in two decades.
VY NGUYEN

For Henry and Jane Evans, a big win would be to get a robot that simply works reliably. The Stretch robot that the Evanses experimented with is still too buggy to use without researchers present to troubleshoot, and their home doesn’t always have the dependable Wi-Fi connectivity Henry needs in order to communicate with Stretch using a laptop.

Even so, Henry says, one of the greatest benefits of his experiment with robots has been independence: “All I do is lay in bed, and now I can do things for myself that involve manipulating my physical environment.”

Thanks to Stretch, for the first time in two decades, Henry was able to hold his own playing cards during a match. 

“I kicked everyone’s butt several times,” he says. 

“Okay, let’s not talk too big here,” Jane says, and laughs.

The inadvertent geoengineering experiment that the world is now shutting off

This article is from The Spark, MIT Technology Review’s weekly climate newsletter. To receive it in your inbox every Wednesday, sign up here.

Usually when we talk about climate change, the focus is squarely on the role that greenhouse-gas emissions play in driving up global temperatures, and rightly so. But another important, less-known phenomenon is also heating up the planet: reductions in other types of pollution.

In particular, the world’s power plants, factories, and ships are pumping much less sulfur dioxide into the air, thanks to an increasingly strict set of global pollution regulations. Sulfur dioxide creates aerosol particles in the atmosphere that can directly reflect sunlight back into space or act as the “condensation nuclei” around which cloud droplets form. More or thicker clouds, in turn, reflect away even more sunlight. So when we clean up pollution, we also dial down this cooling effect. 

Before we go any further, let me stress: cutting air pollution is smart public policy that has unequivocally saved lives and prevented terrible suffering. 

The fine particulate matter produced by burning coal, gas, wood, and other biomatter is responsible for millions of premature deaths every year through cardiovascular disease, respiratory illnesses, and various forms of cancer, studies consistently show. Sulfur dioxide causes asthma and other respiratory problems, contributes to acid rain, and depletes the protective ozone layer. 

But as the world rapidly warms, it’s critical to understand the impact of pollution-fighting regulations on the global thermostat as well. Scientists have baked the drop-off of this cooling effect into net warming projections for the coming decades, but they’re also striving to obtain a clearer picture of just how big a role declining pollution will play.

A new study found that reductions in emissions of sulfur dioxide and other pollutants are responsible for about 38%, as a middle estimate, of the increased “radiative forcing” observed on the planet between 2001 and 2019. 

An increase in radiative forcing means that more energy is entering the atmosphere than leaving it, as Kerry Emanuel, a professor of atmospheric science at MIT, lays out in a handy explainer here. As that balance has shifted in recent decades, the difference has been absorbed by the oceans and atmosphere, which is what is warming up the planet. 
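To make the 38% figure concrete, here is the arithmetic with a placeholder total (a sketch only: the 1.0 W/m² value is hypothetical, not a number from the study; only the 38% middle-estimate share comes from the article):

```python
# Illustrative arithmetic for the study's attribution claim. The
# total forcing increase below is a made-up placeholder; only the
# 38% middle-estimate share comes from the article.

total_forcing_increase = 1.0  # hypothetical, in watts per square meter
aerosol_share = 0.38          # middle estimate attributed to cleaner air

from_cleaner_air = aerosol_share * total_forcing_increase
from_other_drivers = total_forcing_increase - from_cleaner_air  # mainly greenhouse gases

print(from_cleaner_air)   # 0.38
print(from_other_drivers) # 0.62
```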

The remainder of the increase is “mainly” attributable to continued rising emissions of heat-trapping greenhouse gases, says Øivind Hodnebrog, a researcher at the Center for International Climate and Environment Research in Norway and lead author of the paper, which relied on climate models, sea-surface temperature readings, and satellite observations.

The study underscores the fact that as carbon dioxide, methane, and other gases continue to drive up temperatures, parallel reductions in air pollution are revealing more of that additional warming, says Zeke Hausfather, a scientist at the independent research organization Berkeley Earth. And it’s happening at a point when, by most accounts, global warming is about to begin accelerating or has already started to do so. (There’s ongoing debate over whether researchers can yet detect that acceleration and whether the world is now warming faster than researchers had expected.)

Because of the cutoff date, the study did not capture a more recent contributor to these trends. Starting in 2020, under new regulations from the International Maritime Organization, commercial shipping vessels have also had to steeply reduce the sulfur content in fuels. Studies have already detected a decrease in the formation of “ship tracks,” or the lines of clouds that often form above busy shipping routes. 

Again, this is a good thing in the most important way: maritime pollution alone is responsible for tens of thousands of early deaths every year. But even so, I have seen and heard of suggestions that perhaps we should slow down or alter the implementation of some of these pollution policies, given the declining cooling effect.

A 2013 study explored one way to potentially balance the harms and benefits. The researchers simulated a scenario in which the maritime industry would be required to use very low-sulfur fuels around coastlines, where the pollution has the biggest effect on mortality and health. But then the vessels would double the fuel’s sulfur content when crossing the open ocean. 

In that hypothetical world, the cooling effect was a bit stronger and premature deaths declined by 69% with respect to figures at the time, delivering a considerable public health improvement. But notably, under a scenario in which low-sulfur fuels were required across the board, mortality declined by 96%, a difference of more than 13,000 preventable deaths every year.
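Those two figures imply a rough baseline on the order of 50,000 deaths a year. A quick sanity check (the baseline below is inferred and hypothetical, not a number reported in the study):

```python
# Sanity check of the 2013 study's comparison as reported above.
# The baseline is an inferred, hypothetical figure chosen so that
# the article's percentages and its 13,000 figure line up.

baseline_deaths = 50000  # hypothetical annual premature deaths from shipping

averted_coastal_only = 0.69 * baseline_deaths  # low-sulfur fuel near coasts
averted_everywhere = 0.96 * baseline_deaths    # low-sulfur fuel across the board

extra_lives_saved = averted_everywhere - averted_coastal_only
print(round(extra_lives_saved))  # 13500, consistent with "more than 13,000"
```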

Now that the rules are in place and the industry is running on low-sulfur fuels, intentionally reintroducing pollution over the oceans would be a far more controversial matter.

While society basically accepted for well over a century that ships were inadvertently emitting sulfur dioxide into the air, flipping those emissions back on for the purpose of easing global warming would amount to a form of solar geoengineering, a deliberate effort to tweak the climate system.

Many think such planetary interventions are far too powerful and unpredictable for us to muck around with. And to be sure, this particular approach would be one of the more ineffective, dangerous, and expensive ways to carry out solar geoengineering, if the world ever decided it should be done at all. The far more commonly studied concept is emitting sulfur dioxide high in the stratosphere, where it would persist for longer and, as a bonus, not be inhaled by humans. 

On an episode of the Energy vs. Climate podcast last fall, David Keith, a professor at the University of Chicago who has closely studied the topic, said that it may be possible to slowly implement solar geoengineering in the stratosphere as a means of balancing out the reduced cooling occurring from sulfur dioxide emissions in the troposphere.

“The kind of solar geoengineering ideas that people are talking about seriously would be a thin wedge that would, for example, start replacing what was happening with the added warming we have from unmasking the aerosol cooling from shipping,” he said. 

Positioning the use of solar geoengineering as a means of merely replacing a cruder form that the world was shutting down offers a somewhat different mental framing for the concept—though certainly not one that would address all the deep concerns and fierce criticisms.


Now read the rest of The Spark 

Read more from MIT Technology Review’s archive: 

Back in 2018, I wrote a piece about the maritime rules that were then in the works and the likelihood that they would fuel additional global warming, noting that we were “about to kill a massive, unintentional” experiment in solar geoengineering.

Another thing

Speaking of the concerns about solar geoengineering, late last week I published a deep dive into Harvard’s unsuccessful, decade-long effort to launch a high-altitude balloon to conduct a tiny experiment in the stratosphere. I asked a handful of people who were involved in the project or followed it closely for their insights into what unfolded, the lessons that can be drawn from the episode—and their thoughts on what it means for geoengineering research moving forward.

Keeping up with Climate 

Yup, as the industry predicted (and common sense would suggest), this week’s solar eclipse dramatically cut solar power production across North America. But for the most part, grid operators were able to manage their systems smoothly, minus a few price spikes, thanks in part to a steady buildout of battery banks and the availability of other sources like natural gas and hydropower. (Heatmap)

There’s been a pile-up of bad news for Tesla in recent days. First, the company badly missed analyst expectations for vehicle deliveries during the first quarter. Then, Reuters reported that the EV giant has canceled plans for a low-cost, mass-market car. That may have something to do with the move to “prioritize the development of a robotaxi,” which the Wall Street Journal then wrote about. Over on X, Elon Musk denied the Reuters story, sort of, posting that “Reuters is lying (again).” But there’s a growing sense that his transformation into a “far-right activist” is taking an increasingly heavy toll on his personal and business brands. (Wall Street Journal)

In a landmark ruling this week, the European Court of Human Rights determined that by not taking adequate steps to address the dangers of climate change, including increasingly severe heat waves that put the elderly at particular risk, Switzerland had violated the human rights of a group of older Swiss women who had brought a case against the country. Legal experts say the ruling creates a precedent that could unleash many similar cases across Europe. (The Guardian)

The Download: generating AI memories, and China’s softening tech regulation

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

Generative AI can turn your most precious memories into photos that never existed

As a six-year-old growing up in Barcelona, Spain, during the 1940s, Maria would visit a neighbor’s apartment in her building when she wanted to see her father. From there, she could try and try to catch a glimpse of him in the prison below, where he was locked up for opposing the dictatorship of Francisco Franco.

There is no photo of Maria on that balcony. But she can now hold something like it: a fake photo—or memory-based reconstruction, as the Barcelona-based design studio Domestic Data Streamers puts it—of the scene that a real photo might have captured. The studio uses generative image models, such as OpenAI’s DALL-E, to bring people’s memories to life.

The fake snapshots are blurred and distorted, but they can still rewind a lifetime in an instant. Read the full story.

—Will Douglas Heaven

Why China’s regulators are softening on its tech sector

Understanding the Chinese government’s decisions to bolster or suppress a certain technology is always a challenge. Why does it favor this sector instead of that one? What triggers officials to suddenly initiate a crackdown? The answers are never easy to come by.

Angela Huyue Zhang, a law professor in Hong Kong, has some answers. She spoke with Zeyi Yang, our China reporter, about how Chinese regulators almost always swing back and forth between regulating tech too much and not enough, how local governments have gone to great lengths to protect local tech companies, and why AI companies in China are receiving more government goodwill than other sectors today. Read the full story.

This story is from China Report, our weekly newsletter covering tech in China. Sign up to receive it in your inbox every Tuesday.

+ Read more about Zeyi’s conversation with Zhang and how to apply her insights to AI here.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 We need new ways to evaluate how safe an AI model is
Current assessment methods haven’t kept pace with the sector’s rapid development. (FT $)
+ Do AI systems need to come with safety warnings? (MIT Technology Review)

2 Japan has grand hopes of rebuilding its fallen chip industry 
And a small farming town has a critical role to play. (NYT $)
+ Google is working on its own proprietary chips to power its AI. (WSJ $)
+ Intel also unveiled a new chip to challenge Nvidia’s stranglehold on the industry. (Reuters)

3 Russia canceled the launch of its newest rocket
The failure means the country is lagging further behind its space rivals in China and the US. (Bloomberg $)
+ The US is retiring one of its most powerful rockets, the Delta IV Heavy. (Ars Technica)

4 Gaming giant Blizzard is returning to China
After hashing out a new deal with long-time partner NetEase. (WSJ $)

5 Volkswagen converted a former Golf factory to produce all-electric vehicles
Its success suggests that other factories could follow suit without major job losses. (NYT $)
+ Three frequently asked questions about EVs, answered. (MIT Technology Review)

6 OpenAI is limbering up to fight numerous lawsuits
By hiring some of the world’s top legal minds to fight claims it breached copyright law. (WP $)
+ AI models that are capable of “reasoning” are on the horizon—if you believe the hype. (FT $)
+ OpenAI’s hunger for data is coming back to bite it. (MIT Technology Review)

7 San Francisco’s marshlands urgently need more mud
A new project is optimistic that dumping sediment onto the bay floor can help. (Hakai Magazine)
+ Why salt marshes could help save Venice. (MIT Technology Review)

8 Scientists are using eDNA to track down soldiers’ remains
Unlike regular DNA, it’s the genetic material we’re all constantly shedding. (Undark Magazine)
+ How environmental DNA is giving scientists a new way to understand our world. (MIT Technology Review)

9 We may have finally solved a cosmic mystery
However, not all cosmologists are in agreement. (New Scientist $)

10 Social media loves angry music 🎧
Extreme emotions require a similarly intense soundtrack. (Wired $)

Quote of the day

“I would urge everyone to think of AI as a sword, not just a shield, when it comes to bad content.”

—Nick Clegg, Meta’s global affairs chief, plays up AI’s ability to prevent the spread of misinformation rather than propagate it, the Guardian reports.

The big story

Inside China’s unexpected quest to protect data privacy

August 2020

In the West, it’s widely believed that neither the Chinese government nor Chinese people care about privacy. In reality, this isn’t true.

Over the last few years the Chinese government has begun to implement privacy protections that in many respects resemble those in America and Europe today.

At the same time, however, it has ramped up state surveillance. This paradox has become a defining feature of China’s emerging data privacy regime, and raises a serious question: Can a system endure with strong protections for consumer privacy, but almost none against government snooping? Read the full story.

—Karen Hao

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or tweet ’em at me.)

+ The rare, blind, hairy mole is a sight to behold.
+ When science fiction authors speak, the world listens.
+ Great, now the cat piano is going to be stuck in my head all day.
+ How to speak to just about anyone—should you want to, that is.