The Download: the future of AI moviemaking, and what to know about plug-in hybrids

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

What’s next for generative video

When OpenAI revealed its new generative video model, Sora, last month, it invited a handful of filmmakers to try it out. This week the company published the results: seven surreal short films that leave no doubt that the future of generative video is coming fast.

The first batch of models that could turn text into video appeared in late 2022, from companies including Meta, Google, and video-tech startup Runway. It was a neat trick, but the results were grainy, glitchy, and just a few seconds long.

Fast-forward 18 months, and the best of Sora’s high-definition, photorealistic output is so stunning that some breathless observers are predicting the death of Hollywood. But fears of misuse are growing too. Read the full story.

—Will Douglas Heaven

This piece is part of MIT Technology Review’s What’s Next series, looking across industries, trends, and technologies to give you a first look at the future. You can read the rest of them here.

Interested in learning more about how filmmakers are already using Sora? Check out how three of them are experimenting with it to create stunning videos—and find out what they told us they believe is coming next.

What to expect if you’re expecting a plug-in hybrid

Plug-in hybrid vehicles should be the mashup that the auto industry needs right now. They can run a short distance on a small battery or take on longer drives with fuel, cutting emissions without asking people to commit to a fully electric vehicle.

But all that freedom can come with a bit of a complication: plug-in hybrids are what drivers make them. That can wind up being a bad thing because people tend to use electric mode less than expected, meaning emissions from the vehicles are higher than anticipated, as I covered in my latest story.

So are you a good match for a plug-in hybrid? Here’s what you should know about the vehicles.

—Casey Crownhart

This story is from The Spark, our weekly climate and energy newsletter. Sign up to receive it in your inbox every Wednesday.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 Sam Bankman-Fried will be sentenced today 
Undeterred, he’s said to be doling out crypto advice from prison. (Bloomberg $)
+ Attorneys argue he’d commit more fraud if he could. (The Guardian)
+ SBF’s particular brand of effective altruism deserves equal scrutiny. (Wired $)
+ Inside effective altruism, where the far future counts a lot more than the present. (MIT Technology Review)

2 The White House wants federal agencies to check AI for bias  
The policy requires departments to verify that AI tools won’t put Americans at risk. (Wired $)

3 New York City is welcoming robotaxis
But only if they’re accompanied by human safety drivers. (The Verge)
+ What’s next for robotaxis in 2024. (MIT Technology Review)

4 Kate Middleton conspiracy theories are still going 
Conspiracy theorists have convinced themselves her recent video has been AI-manipulated. (WP $)

5 How Palmer Luckey pivoted from VR wunderkind to AI surveillance mogul
He’s selling advanced weapons systems he’s likened to the atomic bomb. (FT $)
+ It’s still an uphill slog for startups to win Pentagon contracts. (The Information $)
+ Why business is booming for military AI startups. (MIT Technology Review)

6 How do your political views compare to a chatbot’s?
AI models’ political leanings matter—particularly when we know so little about how they’re trained. (NYT $)
+ The number of extremists doxxing executives is on the rise. (Bloomberg $)
+ AI language models are rife with different political biases. (MIT Technology Review)

7 Antarctica is melting
But the world’s attention is fixed on the Arctic. (Economist $)
+ How Antarctica’s history of isolation is ending—thanks to Starlink. (MIT Technology Review)

8 Europe’s longest hyperloop test track is now open
Although its top speeds are far from what it’s supposed to be capable of. (The Guardian)

9 Moving home is a colossal pain
But AI tool Yembo could help to take away some of the effort. (IEEE Spectrum)

10 The record for the most accurate clock has been broken
The clock could tick for 40 billion years without making a mistake. (New Scientist $)

Quote of the day

“His life in recent years has been one of unmatched greed and hubris; of ambition and rationalization; and courting risk and gambling repeatedly with other people’s money.”

—The US Attorney’s office in Manhattan, which charged Sam Bankman-Fried in December 2022, criticizes the disgraced founder in a sentencing memorandum, Reuters reports.

The big story

Minneapolis police used fake social media profiles to surveil Black people

April 2022

The Minneapolis Police Department violated civil rights law through a pattern of racist policing practices, according to a damning report by the Minnesota Department of Human Rights. 

The report found that officers stop, search, arrest, and use force against people of color at a much higher rate than white people, and that they covertly surveilled Black people not suspected of any crimes via social media. 

The findings are consistent with MIT Technology Review’s investigation of Minnesota law enforcement agencies, which has revealed an extensive surveillance network that targeted activists in the aftermath of the murder of George Floyd. Read the full story.

—Tate Ryan-Mosley and Sam Richards

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or tweet ’em at me.)

+ Take a look at the winners of this year’s World Nature Photography Awards—they’re pretty special.
+ Homemade stock is well worth the effort, apparently.
+ Doctors are warning people in the UK not to eat a whole Easter egg in one go this weekend, but we can’t make any promises.
+ There’s gold in them thar Shropshire hills!

How three filmmakers created Sora’s latest stunning videos

In the last month, a handful of filmmakers have taken Sora for a test drive. The results, which OpenAI published this week, are amazing. The short films are a big jump up even from the cherry-picked demo videos that OpenAI used to tease its new generative model just six weeks ago. Here’s how three of the filmmakers did it.

“Air Head” by Shy Kids

Shy Kids is a pop band and filmmaking collective based in Toronto that describes its style as “punk-rock Pixar.” The group has experimented with generative video tech before. Last year it made a music video for one of its songs using an open-source tool called Stable Warpfusion. It’s cool, but low-res and glitchy. The film it made with Sora, called “Air Head,” could pass for real footage—if it didn’t feature a man with a balloon for a face.

One problem with most generative video tools is that it’s hard to maintain consistency across frames. When OpenAI asked Shy Kids to try out Sora, the band wanted to see how far they could push it. “We thought a fun, interesting experiment would be—could we make a consistent character?” says Shy Kids member Walter Woodman. “We think it was mostly successful.”

Generative models can also struggle with anatomical details like hands and faces. But in the video there is a scene showing a train car full of passengers, and the faces are near perfect. “It’s mind-blowing what it can do,” says Woodman. “Those faces on the train were all Sora.”

Has generative video’s problem with faces and hands been solved? Not quite. We still get glimpses of warped body parts. And text is still a problem (in another video, by the creative agency Native Foreign, we see a bike repair shop with the sign “Biycle Repaich”). But everything in “Air Head” is raw output from Sora. After editing together many different clips produced with the tool, Shy Kids did a bunch of post-processing to make the film look even better. They used visual effects tools to fix certain shots of the main character’s balloon face, for example.

Woodman also thinks that the music (which they wrote and performed) and the voice-over (which they also wrote and performed) help to lift the quality of the film even more. Mixing these human touches in with Sora’s output is what makes the film feel alive, says Woodman. “The technology is nothing without you,” he says. “It is a powerful tool, but you are the person driving it.”

[Update: Shy Kids have posted a behind-the-scenes video for Air Head on X. Come for the pro tips, stay for the Sora bloopers: “How do you maintain a character and look consistent even though Sora is a slot machine as to what you get back?” asks Woodman.]

“Abstract” by Paul Trillo

Paul Trillo, an artist and filmmaker, wanted to stretch what Sora could do with the look of a film. His video is a mash-up of retro-style footage with shots of a figure who morphs into a glitter ball and a breakdancing trash man. He says that everything you see is raw output from Sora: “No color correction or post FX.” Even the jump-cut edits in the first part of the film were produced using the generative model.

Trillo felt that the demos that OpenAI put out last month came across too much like clips from video games. “I wanted to see what other aesthetics were possible,” he says. The result is a video that looks like something shot with vintage 16-millimeter film. “It took a fair amount of experimenting, but I stumbled upon a series of prompts that helps make the video feel more organic or filmic,” he says.

“Beyond Our Reality” by Don Allen Stevenson III

Don Allen Stevenson III is a filmmaker and visual effects artist. He was one of the artists invited by OpenAI to try out DALL-E 2, its text-to-image model, a couple of years ago. Stevenson’s film is a NatGeo-style nature documentary that introduces us to a menagerie of imaginary animals, from the girafflamingo to the eel cat.

In many ways working with text-to-video is like working with text-to-image, says Stevenson. “You enter a text prompt and then you tweak your prompt a bunch of times,” he says. But there’s an added hurdle. When you’re trying out different prompts, Sora produces low-res video. When you hit on something you like, you can then increase the resolution. But going from low to high res involves another round of generation, and what you liked in the low-res version can be lost.

Sometimes the camera angle is different or the objects in the shot have moved, says Stevenson. Hallucination is still a feature of Sora, as it is in any generative model. With still images this might produce weird visual defects; with video those defects can appear across time as well, with weird jumps between frames.

Stevenson also had to figure out how to speak Sora’s language. It takes prompts very literally, he says. In one experiment he tried to create a shot that zoomed in on a helicopter. Sora produced a clip in which it mixed together a helicopter with a camera’s zoom lens. But Stevenson says that with a lot of creative prompting, Sora is easier to control than previous models.

Even so, he thinks that surprises are part of what makes the technology fun to use: “I like having less control. I like the chaos of it,” he says. There are many other video-making tools that give you control over editing and visual effects. For Stevenson, the point of a generative model like Sora is to come up with strange, unexpected material to work with in the first place.

The clips of the animals were all generated with Sora. Stevenson tried many different prompts until the tool produced something he liked. “I directed it, but it’s more like a nudge,” he says. He then went back and forth, trying out variations.

Stevenson pictured his fox crow having four legs, for example. But Sora gave it two, which worked even better. (It’s not perfect: sharp-eyed viewers will see that at one point in the video the fox crow switches from two legs to four, then back again.) Sora also produced several versions that he thought were too creepy to use.

When he had a collection of animals he really liked, he edited them together. Then he added captions and a voice-over on top. Stevenson could have created his made-up menagerie with existing tools. But it would have taken hours, even days, he says. With Sora the process was far quicker.

“I was trying to think of something that would look cool and experimented with a lot of different characters,” he says. “I have so many clips of random creatures.” Things really clicked when he saw what Sora did with the girafflamingo. “I started thinking: What’s the narrative around this creature? What does it eat, where does it live?” he says. He plans to put out a series of extended films following each of the fantasy animals in more detail.

Stevenson also hopes his fantastical animals will make a bigger point. “There’s going to be a lot of new types of content flooding feeds,” he says. “How are we going to teach people what’s real? In my opinion, one way is to tell stories that are clearly fantasy.”

Stevenson points out that his film could be the first time a lot of people see a video created by a generative model. He wants that first impression to make one thing very clear: This is not real.

What’s next for generative video

MIT Technology Review’s What’s Next series looks across industries, trends, and technologies to give you a first look at the future. You can read the rest of them here.

When OpenAI revealed its new generative video model, Sora, last month, it invited a handful of filmmakers to try it out. This week the company published the results: seven surreal short films that leave no doubt that the future of generative video is coming fast. 

The first batch of models that could turn text into video appeared in late 2022, from companies including Meta, Google, and video-tech startup Runway. It was a neat trick, but the results were grainy, glitchy, and just a few seconds long.

Fast-forward 18 months, and the best of Sora’s high-definition, photorealistic output is so stunning that some breathless observers are predicting the death of Hollywood. Runway’s latest models can produce short clips that rival those made by blockbuster animation studios. Midjourney and Stability AI, the firms behind two of the most popular text-to-image models, are now working on video as well.

A number of companies are racing to build a business on the back of these breakthroughs. Most are figuring out what that business is as they go. “I’ll routinely scream, ‘Holy cow, that is wicked cool’ while playing with these tools,” says Gary Lipkowitz, CEO of Vyond, a firm that provides a point-and-click platform for putting together short animated videos. “But how can you use this at work?”

Whatever the answer to that question, it will probably upend a wide range of businesses and change the roles of many professionals, from animators to advertisers. Fears of misuse are also growing. The widespread ability to generate fake video will make it easier than ever to flood the internet with propaganda and nonconsensual porn. We can see it coming. The problem is, nobody has a good fix.

As we continue to get to grips with what’s ahead—good and bad—here are four things to think about. We’ve also curated a selection of the best videos filmmakers have made using this technology, including an exclusive reveal of “Somme Requiem,” an experimental short film by Los Angeles–based production company Myles. Read on for a taste of where AI moviemaking is headed. 

1. Sora is just the start

OpenAI’s Sora is currently head and shoulders above the competition in video generation. But other companies are working hard to catch up. The market is going to get extremely crowded over the next few months as more firms refine their technology and start rolling out Sora’s rivals.

The UK-based startup Haiper came out of stealth this month. It was founded in 2021 by former Google DeepMind and TikTok researchers who wanted to work on technology called neural radiance fields, or NeRF, which can transform 2D images into 3D virtual environments. They thought a tool that turned snapshots into scenes users could step into would be useful for making video games.

But six months ago, Haiper pivoted from virtual environments to video clips, adapting its technology to fit what CEO Yishu Miao believes will be an even bigger market than games. “We realized that video generation was the sweet spot,” says Miao. “There will be a super-high demand for it.”

“Air Head” is a short film made by Shy Kids, a pop band and filmmaking collective based in Toronto, using Sora.

Like OpenAI’s Sora, Haiper’s generative video tech uses a diffusion model to manage the visuals and a transformer (the component in large language models like GPT-4 that makes them so good at predicting what comes next), to manage the consistency between frames. “Videos are sequences of data, and transformers are the best model to learn sequences,” says Miao.

Consistency is a big challenge for generative video and the main reason existing tools produce just a few seconds of video at a time. Transformers for video generation can boost the quality and length of the clips. The downside is that transformers make stuff up, or hallucinate. In text, this is not always obvious. In video, it can result in, say, a person with multiple heads. Keeping transformers on track requires vast silos of training data and warehouses full of computers.

That’s why Irreverent Labs, founded by former Microsoft researchers, is taking a different approach. Like Haiper, Irreverent Labs started out generating environments for games before switching to full video generation. But the company doesn’t want to follow the herd by copying what OpenAI and others are doing. “Because then it’s a battle of compute, a total GPU war,” says David Raskino, Irreverent’s cofounder and CTO. “And there’s only one winner in that scenario, and he wears a leather jacket.” (He’s talking about Jensen Huang, CEO of the trillion-dollar chip giant Nvidia.)

Instead of using a transformer, Irreverent’s tech combines a diffusion model with a model that predicts what’s in the next frame on the basis of common-sense physics, such as how a ball bounces or how water splashes on the floor. Raskino says this approach reduces both training costs and the number of hallucinations. The model still produces glitches, but they are distortions of physics (like a bouncing ball not following a smooth curve, for example) with known mathematical fixes that can be applied to the video after it is generated, he says.
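Raskino doesn’t spell out what those mathematical fixes look like, but the idea can be sketched in a few lines. Assuming a thrown ball’s vertical position should trace a parabola, a least-squares fit over the frames can flag the ones that break the arc and snap them back onto it. Everything here (function names, the tolerance, the data) is illustrative, not Irreverent’s actual pipeline:

```python
def solve3(A, b):
    """Solve a 3x3 linear system by Gaussian elimination with partial pivoting."""
    M = [row[:] + [v] for row, v in zip(A, b)]  # augmented matrix
    for col in range(3):
        pivot = max(range(col, 3), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        for r in range(col + 1, 3):
            f = M[r][col] / M[col][col]
            for c in range(col, 4):
                M[r][c] -= f * M[col][c]
    x = [0.0, 0.0, 0.0]
    for r in (2, 1, 0):  # back-substitution
        x[r] = (M[r][3] - sum(M[r][c] * x[c] for c in range(r + 1, 3))) / M[r][r]
    return x

def fit_parabola(ts, ys):
    """Least-squares fit of y = a*t^2 + b*t + c via the normal equations."""
    s = lambda k: sum(t ** k for t in ts)
    sy = lambda k: sum((t ** k) * y for t, y in zip(ts, ys))
    A = [[s(4), s(3), s(2)],
         [s(3), s(2), s(1)],
         [s(2), s(1), len(ts)]]
    return solve3(A, [sy(2), sy(1), sy(0)])

def snap_to_arc(ts, ys, tol=0.5):
    """Replace frames whose position deviates from the fitted arc by more than tol."""
    a, b, c = fit_parabola(ts, ys)
    fit = [a * t * t + b * t + c for t in ts]
    return [f if abs(y - f) > tol else y for y, f in zip(ys, fit)]
```

The appeal of this kind of post-hoc correction, per Raskino’s argument, is that it runs on the generated video after the fact, so the model itself doesn’t need retraining.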

Which approach will last remains to be seen. Miao compares today’s technology to large language models circa GPT-2. Five years ago, OpenAI’s groundbreaking early model amazed people because it showed what was possible. But it took several more years for the technology to become a game-changer.

It’s the same with video, says Miao: “We’re all at the bottom of the mountain.”

2. What will people do with generative video? 

Video is the medium of the internet. YouTube, TikTok, newsreels, ads: expect to see synthetic video popping up everywhere there’s video already.

The marketing industry is one of the most enthusiastic adopters of generative technology. Two-thirds of marketing professionals have experimented with generative AI in their jobs, according to a recent survey Adobe carried out in the US, with more than half saying they have used the technology to produce images.

Generative video is next. A few marketing firms have already put out short films to demonstrate the technology’s potential. The latest example is the 2.5-minute-long “Somme Requiem,” made by Myles. You can watch the film below in an exclusive reveal from MIT Technology Review.

“Somme Requiem” is a short film made by Los Angeles production company Myles. Every shot was generated using Runway’s Gen-2 model. The clips were then edited together by a team of video editors at Myles.

“Somme Requiem” depicts snowbound soldiers during the World War I Christmas ceasefire in 1914. The film is made up of dozens of different shots that were produced using a generative video model from Runway, then stitched together, color-corrected, and set to music by human video editors at Myles. “The future of storytelling will be a hybrid workflow,” says founder and CEO Josh Kahn.

Kahn picked the period wartime setting to make a point. He notes that the Apple TV+ series Masters of the Air, which follows a group of World War II airmen, cost $250 million. The team behind Peter Jackson’s World War I documentary They Shall Not Grow Old spent four years curating and restoring more than 100 hours of archival film. “Most filmmakers can only dream of ever having an opportunity to tell a story in this genre,” says Kahn.

“Independent filmmaking has been kind of dying,” he adds. “I think this will create an incredible resurgence.”

Raskino hopes so. “The horror movie genre is where people test new things, to try new things until they break,” he says. “I think we’re going to see a blockbuster horror movie created by, like, four people in a basement somewhere using AI.”

So is generative video a Hollywood-killer? Not yet. The scene-setting shots in “Somme Requiem”—empty woods, a desolate military camp—look great. But the people in it are still afflicted with mangled fingers and distorted faces, hallmarks of the technology. Generative video is best at wide-angle pans or lingering close-ups, which create an eerie atmosphere but little action. If “Somme Requiem” were any longer it would get dull.

But scene-setting shots pop up all the time in feature-length movies. Most are just a few seconds long, but they can take hours to film. Raskino suggests that generative video models could soon be used to produce those in-between shots for a fraction of the cost. This could also be done on the fly in later stages of production, without requiring a reshoot.

Michal Pechoucek, CTO at Gen Digital, the cybersecurity giant behind a range of antivirus brands including Norton and Avast, agrees. “I think this is where the technology is headed,” he says. “We’ll see many different models, each specifically trained in a certain domain of movie production. These will just be tools used by talented video production teams.”

We’re not there quite yet. A big problem with generative video is the lack of control users have over the output. Producing still images can be hit and miss; producing a few seconds of video is even more risky.

“Right now it’s still fun, you get a-ha moments,” says Miao. “But generating video that is exactly what you want is a very hard technical problem. We are some way off generating long, consistent videos from a single prompt.”

That’s why Vyond’s Lipkowitz thinks the technology isn’t yet ready for most corporate clients. These users want a lot more control over the look of a video than current tools give them, he says.

Thousands of companies around the world, including around 65% of the Fortune 500 firms, use Vyond’s platform to create animated videos for in-house communications, training, marketing, and more. Vyond draws on a range of generative models, including text-to-image and text-to-voice, but provides a simple drag-and-drop interface that lets users put together a video by hand, piece by piece, rather than generate a full clip with a click.

Running a generative model is like rolling dice, says Lipkowitz. “This is a hard no for most video production teams, particularly in the enterprise sector where everything must be pixel-perfect and on brand,” he says. “If the video turns out bad—maybe the characters have too many fingers, or maybe there is a company logo that is the wrong color—well, unlucky, that’s just how gen AI works.”

The solution? More data, more training, repeat. “I wish I could point to some sophisticated algorithms,” says Miao. “But no, it’s just a lot more learning.”

3. Misinformation isn’t new, but deepfakes will make it worse.

Online misinformation has been undermining our faith in the media, in institutions, and in each other for years. Some fear that adding fake video to the mix will destroy whatever pillars of shared reality we have left.

“We are replacing trust with mistrust, confusion, fear, and hate,” says Pechoucek. “Society without ground truth will degenerate.”

Pechoucek is especially worried about the malicious use of deepfakes in elections. During last year’s elections in Slovakia, for example, attackers shared a fake video that showed the leading candidate discussing plans to manipulate voters. The video was low quality and easy to spot as a deepfake. But Pechoucek believes it was enough to turn the result in favor of the other candidate.

“Adventurous Puppies” is a short clip made by OpenAI using Sora.

John Wissinger, who leads the strategy and innovation teams at Blackbird AI, a firm that tracks and manages the spread of misinformation online, believes fake video will be most persuasive when it blends real and fake footage. Take two videos showing President Joe Biden walking across a stage. In one he stumbles, in the other he doesn’t. Who is to say which is real?

“Let’s say an event actually occurred, but the way it’s presented to me is subtly different,” says Wissinger. “That can affect my emotional response to it.” As Pechoucek noted, a fake video doesn’t even need to be that good to make an impact. A bad fake that fits existing biases will do more damage than a slick fake that doesn’t, says Wissinger.

That’s why Blackbird focuses on who is sharing what with whom. In some sense, whether something is true or false is less important than where it came from and how it is being spread, says Wissinger. His company already tracks low-tech misinformation, such as social media posts showing real images out of context. Generative technologies make things worse, but the problem of people presenting media in misleading ways, deliberately or otherwise, is not new, he says.

Throw bots into the mix, sharing and promoting misinformation on social networks, and things get messy. Just knowing that fake media is out there will sow seeds of doubt into bad-faith discourse. “You can see how pretty soon it could become impossible to discern between what’s synthesized and what’s real anymore,” says Wissinger.

4. We are facing a new online reality.

Fakes will soon be everywhere, from disinformation campaigns, to ad spots, to Hollywood blockbusters. So what can we do to figure out what’s real and what’s just fantasy? There are a range of solutions, but none will work by themselves.

The tech industry is working on the problem. Most generative tools try to enforce certain terms of use, such as preventing people from creating videos of public figures. But there are ways to bypass these filters, and open-source versions of the tools may come with more permissive policies.

Companies are also developing standards for watermarking AI-generated media and tools for detecting it. But not all tools will add watermarks, and watermarks can be stripped from a video’s metadata. No reliable detection tool exists either. Even if such tools worked, they would become part of a cat-and-mouse game of trying to keep up with advances in the models they are designed to police.

“Spaghetti Eating Will Smith” is a short film made by OpenAI using Sora.

Online platforms like X and Facebook have poor track records when it comes to moderation. We should not expect them to do better once the problem gets harder. Miao used to work at TikTok, where he helped build a moderation tool that detects video uploads that violate TikTok’s terms of use. Even he is wary of what’s coming: “There’s real danger out there,” he says. “Don’t trust things that you see on your laptop.” 

Blackbird has developed a tool called Compass, which lets you fact check articles and social media posts. Paste a link into the tool and a large language model generates a blurb drawn from trusted online sources (these are always open to review, says Wissinger) that gives some context for the linked material. The result is very similar to the community notes that sometimes get attached to controversial posts on sites like X, Facebook, and Instagram. The company envisions having Compass generate community notes for anything. “We’re working on it,” says Wissinger.

But people who put links into a fact-checking website are already pretty savvy—and many others may not know such tools exist, or may not be inclined to trust them. Misinformation also tends to travel far wider than any subsequent correction.

In the meantime, people disagree on whose problem this is in the first place. Pechoucek says tech companies need to open up their software to allow for more competition around safety and trust. That would also let cybersecurity firms like his develop third-party software to police this tech. It’s what happened 30 years ago when Windows had a malware problem, he says: “Microsoft let antivirus firms in to help protect Windows. As a result, the online world became a safer place.”

But Pechoucek isn’t too optimistic. “Technology developers need to build their tools with safety as the top objective,” he says. “But more people think about how to make the technology more powerful than worry about how to make it more safe.”

Made by OpenAI using Sora.

There’s a common fatalistic refrain in the tech industry: change is coming, deal with it. “Generative AI is not going to get uninvented,” says Raskino. “This may not be very popular, but I think it’s true: I don’t think tech companies can bear the full burden. At the end of the day, the best defense against any technology is a very well-educated public. There’s no shortcut.”

Miao agrees. “It’s inevitable that we will massively adopt generative technology,” he says. “But it’s also the responsibility of the whole of society. We need to educate people.” 

“Technology will move forward, and we need to be prepared for this change,” he adds. “We need to remind our parents, our friends, that the things they see on their screen might not be authentic.” This is especially true for older generations, he says: “Our parents need to be aware of this kind of danger. I think everyone should work together.”

We’ll need to work together quickly. When Sora came out a month ago, the tech world was stunned by how quickly generative video had progressed. But the vast majority of people have no idea this kind of technology even exists, says Wissinger: “They certainly don’t understand the trend lines that we’re on. I think it’s going to catch the world by storm.”

What to expect if you’re expecting a plug-in hybrid

This article is from The Spark, MIT Technology Review’s weekly climate newsletter. To receive it in your inbox every Wednesday, sign up here.

If you’ve ever eaten at a fusion restaurant or seen an episode of Glee, you know a mashup can be a wonderful thing. 

Plug-in hybrid vehicles should be the mashup that the auto industry needs right now. They can run a short distance on a small battery in electric mode or take on longer drives with a secondary fuel, cutting emissions without asking people to commit to a fully electric vehicle.

But all that freedom can come with a bit of a complication: plug-in hybrids are what drivers make them. That can wind up being a bad thing because people tend to use electric mode less than expected, meaning emissions from the vehicles are higher than anticipated, as I covered in my latest story.

So are you a good match for a plug-in hybrid? Here’s what you should know about the vehicles.

Electric range is limited, and conditions matter

Plug-in hybrids have a very modest battery, and that’s reflected in their range. Models for sale today can generally get somewhere between 25 and 40 miles of electric driving (that’s 40 to 65 kilometers), with a few options getting up to around the 50-mile (80 km) mark.

But winter conditions can cut into that range. Even gas-powered vehicles see fuel economy drop in cold weather, but electric vehicles tend to take a harder hit. Battery-powered vehicles can see a 25% reduction in range in freezing temperatures, or even more depending on how hard the heaters need to work and what sort of driving you’re doing.

In the case of a plug-in hybrid with a small battery, these range cuts can be noticeable even for modest commutes. I spoke with one researcher for a story in 2022 who told me that he uses his plug-in hybrid in electric mode constantly for about nine months out of the year. Charging once overnight gets him to and from his job most of the time, but in the winter, his range shrinks enough to require gas for part of the trip.

It might not be a problem for you lucky folks in California or the south of Spain, but if you’re in a colder climate, you might want to take these range limitations into account. Parking in a warmer place like a garage can help, and you can even preheat your vehicle while it’s plugged in to extend your range.

Charging is a key consideration

Realistically, if you don’t have the ability to charge consistently at home, a plug-in hybrid may not be the best choice for you.

EV drivers who don’t live in single-family homes with attached garages can get creative with charging. Some New York City drivers I’ve spoken with rely entirely on public fast chargers, stopping for half an hour or so to juice up their vehicles as needed.

But plug-in hybrids generally aren’t equipped to handle fast charging speeds, so forget about plugging in at a Supercharger. The vehicles are probably best for people who have access to a charger at home, in a parking garage, or at work. Depending on battery capacity, charging a plug-in hybrid can take about eight hours on a level 1 charger, and two to three hours on a level 2 charger. 
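
As a rough sketch of that arithmetic: charge time is approximately usable battery capacity divided by sustained charging power. The capacity and power figures below are illustrative assumptions, not specs for any particular model.

```python
# Rough charge-time estimate for a plug-in hybrid.
# All numbers here are illustrative assumptions, not manufacturer specs.

def charge_hours(battery_kwh: float, charger_kw: float) -> float:
    """Hours to fill an empty battery at a given sustained power."""
    return battery_kwh / charger_kw

USABLE_BATTERY_KWH = 12.0  # plausible usable capacity for ~30 miles of range
LEVEL_1_KW = 1.4           # standard 120 V household outlet
LEVEL_2_KW = 3.7           # 240 V charging, often limited by the car's onboard charger

print(f"Level 1: {charge_hours(USABLE_BATTERY_KWH, LEVEL_1_KW):.1f} hours")
print(f"Level 2: {charge_hours(USABLE_BATTERY_KWH, LEVEL_2_KW):.1f} hours")
```

With these assumed figures, the estimate lands close to the roughly eight hours on level 1 and two to three hours on level 2 quoted above; a bigger battery or a weaker onboard charger shifts the numbers accordingly.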

Most drivers with plug-in hybrids wind up charging them less often than official estimates assume. That means on average, drivers are producing more emissions than they might expect and probably spending more on fuel, too. For more on setting expectations around plug-in hybrids, read my latest story here.

We could see better plug-in models soon (in some places, at least)

For US drivers, state regulations could mean that plug-in offerings will expand soon.

California recently adopted rules that require manufacturers to sell a higher proportion of low-emissions vehicles. Beginning in 2026, automakers will need clean vehicles to represent 35% of sales, ramping up to 100% in 2035. Several other states have hopped on board with the regulations, including New York, Massachusetts, and Washington.

Plug-in hybrids can qualify under the California rules, but only if they have at least 50 miles (80 km) of electric driving range. That means that we could be seeing more long-range plug-in options very soon, says Aaron Isenstadt, a senior researcher at the International Council on Clean Transportation.

Some other governments aren’t supporting plug-in hybrids, or are actively pushing drivers away from the vehicles and toward fully electric options. The European Union will end sales of gas-powered cars in 2035, including all types of hybrids.

Ultimately, plug-in hybrid vehicles can help reduce emissions from road transportation in the near term, especially for drivers who aren’t ready or willing to make the jump to fully electric cars just yet. But eventually, we’ll need to move on from compromises to fully zero-emissions options.  


Now read the rest of The Spark

Related reading

Real-world driving habits can get in the way of the theoretical benefits of plug-in hybrids. For more on why drivers might be the problem, give my latest story a read

Plug-in hybrids probably aren’t going away anytime soon, as I wrote in December 2022

Still have questions about hybrids and electric vehicles? I answered a few of them for a recent newsletter. Check it out here.

Another thing

China has emerged as a dominant force in climate technology, especially in the world of electric vehicles. If you want to dig into how that happened, and what it means for the future of addressing climate change, check out the latest in our Roundtables series here

For a sampling of what my colleagues got into in this conversation, check out this story from Zeyi Yang about how China came to lead the world in EVs, and this one about how EV giant BYD is getting into shipping

Keeping up with climate  

The US Department of Energy just awarded $6 billion to 33 projects aimed at decarbonizing industry, from cement and steel to paper and food. (Canary Media)

→ Among the winners: Sublime Systems and Brimstone, two startups working on alternative cement. Read more about climate’s hardest problem in my January feature story. (MIT Technology Review)

In the latest in concerning insurance news, State Farm announced it won’t be renewing policies for 72,000 property owners in California. As fire seasons get worse, insuring properties gets riskier. (Los Angeles Times)

Surprise! Big fossil-fuel companies aren’t aligned with goals to limit global warming. A think tank assessed the companies’ plans and found that despite splashy promises, none of the 25 largest oil and gas companies meet targets set by the Paris Agreement. (The Guardian)

An AI model can predict flooding five days in advance. This and other AI tools could help better forecast dangerous scenarios in remote places with fewer flood gauges. (Bloomberg)

Boeing’s 737 Max planes have been all over the news with incidents including a door flying off on a recent Alaska Airlines flight. Some experts say the problems can be traced back in part to the company’s corner-cutting on sustainability efforts. (Heated)

In Denver, e-bike vouchers get snapped up like Taylor Swift tickets. The city is aiming to lower the cost of the vehicles for residents in an effort to reduce the total number of car trips. It’s obviously a popular program, though some experts question whether the funding could be more effective elsewhere. (Grist)

A nuclear plant in New York was shut down in 2021—and predictably, emissions went up. It’s been a step back for clean energy in the state, as natural gas has stepped in to fill the gap. (The Guardian)

Germany used to be a solar superpower, but China has come to dominate the industry. Some domestic manufacturers aren’t giving up just yet, arguing that local production will be key to meeting ambitious clean-energy goals. (New York Times)

A company will pour 9,000 tons of sand into the sea in the name of carbon removal. Vesta’s pilot project just got a regulatory green light, and it’ll be a big step for efforts to boost the ocean’s ability to soak up carbon dioxide from the atmosphere. (Heatmap)

The Download: the problem with plug-in hybrids, and China’s AI talent

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

The problem with plug-in hybrids? Their drivers.

Plug-in hybrids are supposed to be the best of both worlds—the convenience of a gas-powered car with the climate benefits of a battery electric vehicle. But new data suggests that some official figures severely underestimate the emissions they produce.

According to new real-world driving data from Europe, plug-in hybrids produce roughly 3.5 times the emissions official estimates suggest. The difference is largely linked to driver habits: people tend to charge plug-in hybrids and drive them in electric mode less than expected.

It’s important to close the gap between expectations and reality not only for individuals’ sake, but also to ensure that policies aimed at cutting emissions have the intended effects. Read the full story.

—Casey Crownhart

Four things you need to know about China’s AI talent pool 

In 2019, MIT Technology Review covered a report that shined a light on how fast China’s AI talent pool was growing. Its main finding was pretty interesting: the number of elite AI scholars with Chinese origins had multiplied by 10 in the previous decade, but relatively few of them stayed in China for their work. The majority moved to the US. 

Now the think tank behind the report has published an updated analysis, showing how the makeup of global AI talent has changed since—during a critical period when the industry has shifted significantly and become the hottest technology sector. Here are the four main things you need to know about the global AI talent landscape today. 

—Zeyi Yang

This story is from China Report, our weekly newsletter giving you the inside track on all things happening in China. Sign up to receive it in your inbox every Tuesday.

AI could make better beer. Here’s how.

The news: Crafting a good-tasting beer is a difficult task. Big breweries select hundreds of trained tasters to test their new products. But running such sensory tasting panels is expensive, and perceptions of what tastes good can be highly subjective.

New AI models could help to lighten the load—accurately identifying not only how highly consumers will rate a certain Belgian beer, but also what kinds of compounds brewers should be adding to make the beer taste better.

Why it matters: These kinds of models could help food and drink manufacturers develop new products or tweak existing recipes to better suit the tastes of consumers, which could help save a lot of time and money. Read the full story.

—Rhiannon Williams

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 Inside Silicon Valley’s intense AI talent wars
Companies are throwing $1 million compensation packages at top-tier candidates. (WSJ $)
+ The AI hype train is showing no signs of slowing. (Insider $)

2 Hydropower usage is falling in the Western US
The amount of hydropower generated in the region last year was the lowest in more than 20 years. (The Verge)
+ A startup is plotting the world’s largest ocean geoengineering plant. (IEEE Spectrum)
+ Emissions hit a record high in 2023. Blame hydropower. (MIT Technology Review)

3 Israel is conducting mass surveillance of Palestinians in Gaza
Facial recognition cameras are identifying civilians from lists of wanted persons. (NYT $)

4 Online conspiracy theories are spreading about the Baltimore bridge disaster
Unfounded theories are proliferating unchecked on X. (NBC News)
+ Entire narratives are playing out on Reddit, too. (404 Media)
+ X use in the US is in free fall. (The Guardian)

5 A Samsung-backed AI image platform generates non-consensual porn
Certain prompts easily circumvent the AI’s guardrails. (404 Media)
+ Text-to-image AI models can be tricked into generating disturbing images. (MIT Technology Review)

6 Apple has struck a major Chinese media deal for its Vision Pro
Tencent has agreed to make its apps available for the headset. (The Information $)
+ iPhone shipments are falling in China, according to new data. (Bloomberg $)
+ These minuscule pixels are poised to take augmented reality by storm. (MIT Technology Review)

7 Meta snooped on Snapchat’s traffic
In a bid to gain a competitive advantage. (TechCrunch)

8 Accounting software isn’t exactly sexy
But it can be incredibly lucrative—in Sweden, at least. (FT $)
+ The software industry is asking for more government support in the UK. (Reuters)

9 AI is threatening foreign language education
Fewer people are learning other languages, and automatic translation is on the rise. (The Atlantic $)

10 Inside the search for quantum gravity
If one scientist’s theory is correct, we could discover gravitational rainbows. (New Scientist $)

Quote of the day

“The toilet isn’t quite done flushing here in the crypto industry.”

—Adam Jackson, co-founder of talent network Braintrust, tells Bloomberg why crypto funds seeking an injection of cash may be left disappointed.

The big story

Inside the decades-long fight over Yahoo’s misdeeds in China

December 2023

When you think of Big Tech these days, Yahoo is probably not top of mind. But for Chinese dissident Xu Wanping, the company still looms large—and has for nearly two decades.   

In 2005, Xu was arrested for signing online petitions relating to anti-Japanese protests. He didn’t use his real name, but he did use his Yahoo email address. Yahoo China violated its users’ trust—providing information on certain email accounts to Chinese law enforcement, which in turn allowed the government to identify and arrest some users. 

Xu was one of them; he would serve nine years in prison. Now, he and five other Chinese former political prisoners are suing Yahoo and a slate of co-defendants—not because of the company’s information-sharing (which was the focus of an earlier lawsuit filed by other plaintiffs), but rather because of what came after. Read the full story.

—Eileen Guo

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or tweet ’em at me.)

+ When is a hedgehog not a hedgehog? When it’s a hat bobble.
+ Biff from Back to the Future has had enough of answering all the big questions.
+ Paris’ annual waiter race is chef’s kiss.
+ These BMX kitties are the sweetest of the sweet.

Four things you need to know about China’s AI talent pool 

This story first appeared in China Report, MIT Technology Review’s newsletter about technology in China. Sign up to receive it in your inbox every Tuesday.

In 2019, MIT Technology Review covered a report that shined a light on how fast China’s AI talent pool was growing. Its main finding was pretty interesting: the number of elite AI scholars with Chinese origins had multiplied by 10 in the previous decade, but relatively few of them stayed in China for their work. The majority moved to the US. 

Now the think tank behind the report has published an updated analysis, showing how the makeup of global AI talent has changed since—during a critical period when the industry has shifted significantly and become the hottest technology sector. 

The team at MacroPolo, the think tank of the Paulson Institute, an organization that focuses on US-China relations, studied the national origin, educational background, and current work affiliation of top researchers who gave presentations and had papers accepted at NeurIPS, a top academic conference on AI. Their analysis of the 2019 conference resulted in the first iteration of the Global AI Talent Tracker. They’ve analyzed the December 2022 NeurIPS conference for an update three years later.

I recommend you read the original report, which has a very well-designed infographic that shows the talent flow across countries. But to save you some time, I also talked to the authors and highlighted what I think are the most surprising or important takeaways from the new report. Here are the four main things you need to know about the global AI talent landscape today. 

1. China has become an even more important country for training AI talent.

Even in 2019, Chinese researchers were already a significant part of the global AI community, making up one-tenth of the most elite AI researchers. In 2022, they accounted for 26%, almost dethroning the US (American researchers accounted for 28%). 

“Timing matters,” says Ruihan Huang, senior research associate at MacroPolo and one of the lead authors. “The last three years have seen China dramatically expand AI programs across its university system—now there are some 2,000 AI majors—because it was also building an AI industry to absorb that talent.” 

As a result of these university and industry efforts, many more students in computer science or other STEM majors have joined the AI industry, making Chinese researchers the backbone of cutting-edge AI research.

2. AI researchers now tend to stay in the country where they receive their graduate degree. 

This is perhaps intuitive, but the numbers are still surprisingly high: 80% of AI researchers who went to a graduate school in the US stayed to work in the US, while 90% of their peers who went to a graduate school in China stayed in China.

In a world where major countries are competing with each other to take the lead in AI development, this finding suggests a trick they could use to expand their research capacity: invest in graduate-level institutions and attract overseas students to come. 

This is particularly important in the US-China context, where the souring of the relationship between the two countries has affected the academic field. According to news reports, quite a few Chinese graduate students have been interrogated at the US border or even denied entry in recent years, as a Trump-era policy persisted. Along with the border restrictions imposed during the pandemic years, this hostility could have prevented more Chinese AI experts from coming to the US to learn and work. 

3. The US still overwhelmingly attracts the most AI talent, but China is catching up.

In both 2019 and 2022, the United States topped the rankings in terms of where elite AI researchers work. But it’s also clear that the distance between the US and other countries, particularly China, has narrowed. In 2019, almost three-fifths of top AI researchers worked in the US; only two-fifths did in 2022. 

“The thing about elite talent is that they generally want to work at the most cutting-edge and dynamic places. They want to do incredible work and be rewarded for it,” says AJ Cortese, a senior research associate at MacroPolo and another of the main authors. “So far, the United States still leads the way in having that AI ecosystem—from leading institutions to companies—that appeals to top talent.”

In 2022, 28% of the top AI researchers were working in China. This significant portion speaks to the growth of the domestic AI sector in China and the job opportunities it has created. Compared with 2019, three more Chinese universities and one company (Huawei) made it into the top tier of institutions that produce AI research. 

It’s true that most Chinese AI companies are still considered to lag behind their US peers—for example, China usually trails the US by a few months in releasing comparable generative AI models. However, it seems like they have started catching up.

4. Top-tier AI researchers now are more willing to work in their home countries.

This is perhaps the biggest and also most surprising change in the data, in my opinion. Like their Chinese peers, more Indian AI researchers ended up staying in their home country for work.

In fact, this seems to be a broader pattern across the board: it used to be that more than half of AI researchers worked in a country different from their home. Now, the balance has tipped in favor of working in their own countries. 

This is good news for countries trying to catch up with the US research lead in AI. “It goes without saying most countries would prefer ‘brain gain’ over ‘brain drain’—especially when it comes to a highly complex and technical discipline like AI,” Cortese says. 

It’s not easy to create an environment and culture that both retains homegrown talent and pulls in scholars from other countries, but lots of countries are now working on it. I can only begin to imagine what the report might look like in a few years.  

Did anything else stand out to you in the report? Let me know your thoughts by writing to zeyi@technologyreview.com.


Now read the rest of China Report

Catch up with China

1. The Dutch prime minister will visit China this week to discuss with Chinese president Xi Jinping whether the Dutch chipmaking equipment company ASML can keep servicing Chinese clients. (Reuters $)

  • Here’s an inside look into ASML’s factory and how it managed to dominate advanced chipmaking. (MIT Technology Review)

2. Hong Kong passed a tough national security law that makes it more dangerous to protest Beijing’s rule. (BBC)

3. A new bill in France suggests imposing hefty fines on Shein and similar ultrafast-fashion companies for their negative environmental impact—as much as $11 per item that they sell in France. (Nikkei Asia)

4. Huawei filed a patent to make more advanced chips with a low-tech workaround. (Bloomberg $)

  • Meanwhile, a US official accused the Chinese chip foundry SMIC of breaking US law by making a chip for Huawei. (South China Morning Post $)

5. Instead of the usual six and a half days a week, Tesla has instructed its Shanghai factory to reduce production to five days a week. The slowdown of EV sales in China could be the reason. (Bloomberg $)

6. TikTok is still having plenty of troubles. A new political TV ad (paid for by a mysterious new nonprofit), playing in three US swing states, attacks Zhang Fuping, a ByteDance vice president whom very few people have heard of. (Punchbowl News)

  • As TikTok still hasn’t reached a licensing deal with Universal Music Group, users have had to get creative to find alternative soundtracks for their videos. (Billboard)

7. China launched a communications satellite that will help relay signals for missions to explore the dark side of the moon. (Reuters $)

Lost in translation

The most-hyped generative AI app in China these days is Kimi, according to the Chinese publication Sina Tech. Released by Moonshot AI, a Chinese “unicorn” startup, Kimi made headlines last week when it announced support for text inputs of over 2 million Chinese characters. (For comparison, OpenAI’s GPT-4 Turbo currently supports inputting 100,000 Chinese characters, while Claude3-200K supports about 160,000 characters.)

While some of the app’s virality can be credited to a recently intensified marketing push, Chinese users are now busy feeding popular and classic books to the model and testing how well it can understand the context. Feeling threatened, other Chinese AI apps owned by tech giants like Baidu and Alibaba have followed suit, announcing that they will soon support 5 million or even 10 million Chinese characters. But processing large amounts of text, while impressive, is very costly in the generative AI age—and some observers worry this isn’t the commercial direction that companies ought to head in.

One more thing

Fluffy pajamas, sweatpants, outdated attire: young Chinese people are dressing themselves in “gross outfits” to work—an intentional provocation to their bosses and an expression of silent resistance to the trend that glorifies career hustle. “I just don’t think it’s worth spending money to dress up for work, since I’m just sitting there,” one of them told the New York Times.

Update: The story has been updated to clarify the affiliation of the report authors.

The problem with plug-in hybrids? Their drivers.

Plug-in hybrids are supposed to be the best of both worlds—the convenience of a gas-powered car with the climate benefits of a battery electric vehicle. But new data suggests that some official figures severely underestimate the emissions they produce. 

According to new real-world driving data from the European Commission, plug-in hybrids produce roughly 3.5 times the emissions official estimates suggest. The difference is largely linked to driver habits: people tend to charge plug-in hybrids and drive them in electric mode less than expected.

“The environmental impact of these vehicles is much, much worse than what the official numbers would indicate,” says Jan Dornoff, a research lead at the International Council on Clean Transportation.

While conventional hybrid vehicles contain only a small battery to slightly improve fuel economy, plug-in hybrids allow fully electric driving for short distances. These plug-in vehicles typically have a range of roughly 30 to 50 miles (50 to 80 kilometers) in electric driving mode, with a longer additional range when using the secondary fuel, like gasoline or diesel. But drivers appear to be using much more fuel than was estimated.

According to the new European Commission report, drivers in plug-in hybrid vehicles produce about 139.4 grams of carbon dioxide for every kilometer driven, based on measurements of how much fuel vehicles use over time. On the other hand, official estimates from manufacturers, which are determined using laboratory tests, put emissions at 39.6 grams per kilometer driven.

Some of this gap can be explained by differences between the controlled conditions in a lab and real-world driving. Even conventional combustion-engine vehicles tend to have higher real-world emissions than official estimates suggest, though the gap is roughly 20%, not 200% or more as it is for plug-in hybrids.
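
A quick back-of-the-envelope check of those figures, using only the numbers from the report quoted above:

```python
# The gap between official and measured plug-in hybrid emissions,
# using the European Commission figures cited above.

OFFICIAL_G_PER_KM = 39.6   # lab-test estimate from manufacturers
MEASURED_G_PER_KM = 139.4  # real-world fuel-consumption data

ratio = MEASURED_G_PER_KM / OFFICIAL_G_PER_KM
gap_pct = (MEASURED_G_PER_KM - OFFICIAL_G_PER_KM) / OFFICIAL_G_PER_KM * 100

print(f"Real-world emissions are {ratio:.1f}x the official figure")
print(f"That is a gap of roughly {gap_pct:.0f}%")
```

The ratio works out to about 3.5x, a gap of roughly 250%, versus the 20% or so typical for conventional combustion vehicles.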

The major difference comes down to how drivers tend to use plug-in hybrids. Researchers have noticed the problem in previous studies, some of them using crowdsourced data. 

In one study from the ICCT published in 2022, researchers examined real-world driving habits of people in plug-in hybrids. While the method used to determine official emissions values estimated that drivers use electricity to power vehicles 70% to 85% of the time, the real-world driving data suggested that vehicle owners actually used electric mode for 45% to 49% of their driving. And if vehicles were company-provided cars, the average was only 11% to 15%.

The difference between reality and estimates can be a problem for drivers, who may buy plug-in hybrids expecting climate benefits and gas savings. But if drivers are charging less than expected, the benefits might not be as drastic as promised. Trips taken in a plug-in hybrid cut emissions by only 23% relative to trips in a conventional vehicle, rather than the nearly three-quarters reduction predicted by official estimates, according to the new analysis.

“People need to be realistic about what they face,” Dornoff says. Driving the vehicles in electric mode as much as possible can help maximize the financial and environmental benefits, he adds.

It’s important to close the gap between expectations and reality not only for individuals’ sake, but also to ensure that policies aimed at cutting emissions have the intended effects. 

The European Union passed a law last year that will end sales of gas-powered cars in 2035. This is aimed at cutting emissions from transportation, a sector that makes up around one-fifth of global emissions. In the EU, manufacturers are required to have a certain average emissions value for all their vehicles sold. If plug-in hybrids are performing much worse in the real world than expected, it could mean the transportation sector is actually making less progress toward climate goals than it’s getting credit for.

Plug-in hybrids’ failure to meet expectations is also a problem in the US, says Aaron Isenstadt, a senior researcher at the ICCT based in San Francisco. Real-world fuel consumption was about 50% higher than EPA estimates in one ICCT study, for example. The gap between expectations and reality is smaller in the US partly because official emissions estimates are calculated differently, and partly because US drivers have different driving habits and may have better access to charging at home, Isenstadt says.

The Biden administration recently finalized new tailpipe emissions rules, which set guidelines for manufacturers about the emissions their vehicles can produce. The rules aim to ramp down emissions from new vehicles sold: by 2032, roughly half of new cars sold in the US will need to produce zero emissions to meet the standards.

Both the EU and the US have plans to update estimates about how drivers are using plug-in hybrids, which should help policies in both markets better reflect reality. The EU will make an adjustment to estimates about driver behavior beginning in 2025, while the US will do so later, in 2027.

AI could make better beer. Here’s how.

Crafting a good-tasting beer is a difficult task. Big breweries select hundreds of trained tasters from among their employees to test their new products. But running such sensory tasting panels is expensive, and perceptions of what tastes good can be highly subjective.  

What if artificial intelligence could help lighten the load? New AI models can accurately identify not only how highly consumers will rate a certain Belgian beer, but also what kinds of compounds brewers should be adding to make the beer taste better, according to research published in Nature Communications today.

These kinds of models could help food and drink manufacturers develop new products or tweak existing recipes to better suit the tastes of consumers, which could help save a lot of time and money that would have gone into running trials. 

To train their AI models, the researchers spent five years chemically analyzing 250 commercial beers, measuring each beer’s chemical properties and flavor compounds—which dictate how it’ll taste. 

The researchers then combined these detailed analyses with a trained tasting panel’s assessments of the beers—including hop, yeast, and malt flavors—and 180,000 reviews of the same beers taken from the popular online platform RateBeer, sampling scores for the beers’ taste, appearance, aroma, and overall quality.

This large data set, which links chemical data with sensory features, was used to train 10 machine-learning models to accurately predict a beer’s taste, smell, and mouthfeel and how likely a consumer was to rate it highly. 

To compare the models, the researchers split the data into a training set and a test set. Once a model was trained on the training set, they evaluated how well it could predict the ratings in the held-out test set.

The researchers found that all the models were better than the trained panel of human experts at predicting the rating a beer had received from RateBeer.
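
The evaluation setup described above can be sketched in a few lines. This is a minimal illustration only: the data below is synthetic, and plain least squares stands in for the study's 10 machine-learning models, whose actual architectures and 250-beer chemical dataset are not reproduced here.

```python
# Sketch of the held-out-test-set evaluation described above.
# Synthetic stand-in data: 250 "beers" x 20 "flavor-compound" measurements.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(250, 20))
# A stand-in consumer rating driven by a few compounds plus tasting noise.
y = 1.5 * X[:, 0] - X[:, 3] + 0.5 * X[:, 7] + rng.normal(scale=0.3, size=250)

# Hold out 50 beers as a test set; fit on the remaining 200.
X_train, X_test = X[:200], X[200:]
y_train, y_test = y[:200], y[200:]

# Ordinary least squares as a stand-in for the study's ML models.
coef, *_ = np.linalg.lstsq(X_train, y_train, rcond=None)
pred = X_test @ coef

# Compare against a naive baseline that always predicts the mean rating.
mae = np.mean(np.abs(pred - y_test))
baseline = np.mean(np.abs(y_train.mean() - y_test))
print(f"model error: {mae:.2f}  vs  mean-baseline error: {baseline:.2f}")
```

A model beating the naive baseline on beers it never saw is the same basic test the researchers applied, with human expert panels playing the role of the baseline.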

Through these models, the researchers were able to pinpoint specific compounds that contribute to consumer appreciation of a beer: people were more likely to rate a beer highly if it contained these specific compounds. For example, the models predicted that adding lactic acid, which is present in tart-tasting sour beers, could improve other kinds of beers by making them taste fresher.

“We had the models analyze these beers and then asked them ‘How can we make these beers better?’” says Kevin Verstrepen, a professor at KU Leuven and director of the VIB-KU Leuven Center for Microbiology, who worked on the project. “Then we went in and actually made those changes to the beers by adding flavor compounds. And lo and behold—once we did blind tastings, the beers became better, and more generally appreciated.”

One exciting application of the research is that it could be used to make better alcohol-free beers—a major challenge for the beverage industry, he says. The researchers used the model’s predictions to add a mixture of compounds to a nonalcoholic beer that human tasters rated significantly higher in terms of body and sweetness than its previous incarnation.

This type of machine-learning approach could also be enormously useful in exploring food texture and nutrition and adapting ingredients to suit different populations, says Carolyn Ross, a professor of food science at Washington State University, who was not involved in the research. For example, older people tend to find complex combinations of textures or ingredients less appealing, she says. 

“There’s so much that we can explore there, especially when we’re looking at different populations and trying to come up with specific products for them,” she says.

The Download: Adobe’s AI ambitions, and how work is changing


How Adobe’s bet on non-exploitative AI is paying off

Since the beginning of the generative AI boom, there has been a fight over how large AI models are trained. In one camp sit tech companies such as OpenAI that claim it is “impossible” to train AI without copyrighted data. And in the other camp are artists who argue that AI companies have taken their intellectual property without consent or compensation.

Adobe is pretty unusual in siding with the latter group, with an approach that stands out as an example of how generative AI products can be built without scraping copyrighted data from the internet. It released its image-generating model Firefly, which is integrated into its popular photo editing tool Photoshop, one year ago.

In an exclusive interview with MIT Technology Review, Adobe’s AI leaders are adamant this is the only way forward. At stake is not just the livelihood of creators, they say, but our whole information ecosystem. Read the full story.

—Melissa Heikkilä

How AI is changing the way we work

AI is fundamentally transforming the nature of work for people and the organizations that employ them.

We’re holding a free LinkedIn Live session about how AI is changing the way we work at midday ET today, delving into everything from the economic impacts on employers to the new jobs being created—or lost. Register here to join the conversation—our editors and reporters are looking forward to hearing your thoughts!

Meet the MIT Technology Review AI team in London

The UK is home to AI powerhouse Google DeepMind, a slew of exciting AI startups, and some of the world’s best universities. It’s also where a sizable chunk of the MIT Technology Review team lives, including our senior AI editor Will Douglas Heaven and senior AI reporter Melissa Heikkilä (and me!).

We’re gathering some of the brightest minds in AI in Europe for our flagship AI conference, EmTech Digital, in London on April 16 and 17. Our speakers include top figures from the likes of Meta, Google DeepMind, AI avatar company Synthesia, and NVIDIA. Read more about what you can expect in the latest edition of The Algorithm, our weekly AI newsletter, and register for the event itself here.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 Florida has approved a law banning children under 14 from social media
It’s one of the most restrictive measures a US state has passed to date. (NYT $)
+ Social platforms will be required to delete existing accounts belonging to under-14s. (WP $)
+ Child online safety laws will actually hurt kids, critics say. (MIT Technology Review)

2 AI could make society much, much richer
Economists are excited by its potential, but not everyone agrees. (Vox)
+ ChatGPT is about to revolutionize the economy. We need to decide what that looks like. (MIT Technology Review)

3 The US and UK have sanctioned Chinese state-sponsored hackers
A 14-year hacking campaign targeted critics, politicians and businesses. (WP $)
+ British politicians are being urged by spies to use disappearing messages. (FT $)

4 The US Supreme Court is set to hear its first post-Roe abortion case
It’s considering whether access to abortion pills should be restricted even further. (The Economist $)
+ The stakes for abortion rights couldn’t be higher. (Wired $)
+ The country’s anti-abortion movement is affecting access to IVF, too. (Vox)

5 X has lost a lawsuit against an anti-hate speech nonprofit
The US judge dismissed it as a ‘vapid’ attempt to punish the group. (The Guardian)

6 Things are looking up for FTX customers
It’s looking like they’ll get a lot more money back than originally thought. (FT $)

7 You can’t opt out of Google Search’s chatbot anymore
The company wants feedback, and it wants it now. (Ars Technica)
+ Why you shouldn’t trust AI search engines. (MIT Technology Review)

8 How drones are becoming a valuable tool for animal rights activists
Eyes in the skies can help them to uncover wrongdoing on a colossal scale. (The Guardian)
+ The robots are coming. And that’s a good thing. (MIT Technology Review)

9 Even spies need a good coworking space
Specialist offices designed for dealing with highly sensitive information are on the rise. (Bloomberg $)

10 Meta is hiring AI researchers without even interviewing them
Even Mark Zuckerberg is getting involved and messaging would-be candidates himself. (The Information $)

Quote of the day

“There are holes a mile deep in this guy’s resume, but he’s managed to figure out how to take his chess pieces and move them correctly.”

—A disgruntled startup founder takes aim at the hype surrounding OpenAI founder Sam Altman, Insider reports.

The big story

What happens when you donate your body to science

October 2022

Rebecca George doesn’t mind the vultures that complain from the trees that surround the Western Carolina University body farm. George studies human decomposition, and part of decomposing is becoming food. Scavengers are welcome.

George, a forensic anthropologist, places the body of a donor in the Forensic Osteology Research Station—known as the FOREST. This is Enclosure One, where donors decompose naturally above ground. Nearby is Enclosure Two, where researchers study bodies that have been buried in soil. She is the facility’s curator, and monitors the donors—sometimes for years—as they become nothing but bones.

In the US, about 20,000 people or their families donate their bodies to scientific research and education each year. Whatever their reasons, the decision becomes a gift. Western Carolina’s FOREST is among the places where watchful caretakers know that the dead and the living are deeply connected, and the way you treat the first reflects how you treat the second. Read the full story.

—Abby Ohlheiser

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or tweet ’em at me.)

+ If you fancy trying to spot the solar eclipse on 8 April, these cities are your best bet.
+ Huge relief in Scotland: a stolen gorilla statue has been recovered after a year on the loose.
+ Rollercoaster Tycoon is the game that shaped a generation.
+ Nothing but respect for Ilia Malinin, the American teenage figure skater who delivered a winning performance this weekend to the Succession theme tune.

Meet the MIT Technology Review AI team in London

This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.

The UK is home to AI powerhouse Google DeepMind, a slew of exciting AI startups, and some of the world’s best universities. It’s also where I live, along with quite a few of my MIT Technology Review colleagues, including our senior AI editor Will Douglas Heaven. 

That’s why I’m super stoked to tell you that we’re gathering some of the brightest minds in AI in Europe for our flagship AI conference, EmTech Digital, in London on April 16 and 17. 

Our speakers include top figures like Zoubin Ghahramani, vice president of research at Google DeepMind; Maja Pantic, AI scientific research lead at Meta; Dragoș Tudorache, a member of the European Parliament and one of the key politicians behind the newly passed EU AI Act; and Victor Riparbelli, CEO of AI avatar company Synthesia. 

We’ll also hear from executives at NVIDIA, Roblox, Faculty, and ElevenLabs, and researchers from the UK’s top universities and AI research institutes. 

They will share their wisdom on how to harness AI and what businesses need to know right now about this transformative technology. 

Here are some sessions I am particularly excited about.

Generating AI’s Path Forward
Where is AI going next? Zoubin Ghahramani, vice president of research at Google DeepMind, will map out realistic timelines for new innovation, and he will discuss the need for an overall strategy for a safe and productive AI future for Europe and beyond.

Digital Assistants for AI Automation
You’ve perhaps heard of AI assistants. But in this session, David Barber, director of the Centre for Artificial Intelligence at University College London, will argue that a major transformation will come with the rise of AI agents, which can complete complex sets of actions such as booking travel, answering messages, and performing data entry. 

AI’s Impact on Democracy
A senior official from the UK’s National Cyber Security Centre will walk us through some of the threats posed by AI that keep him up at night. Based on our speaker prep call, I can tell you that real life really is stranger than fiction. 

The AI Act’s Impacts on Policy and Regulations
The AI Act is here, and companies in the US and the UK will have to comply with it if they want to do business in the EU. I will be sitting down with Dragoș Tudorache, one of the key politicians behind the law, to walk you through what companies need to take into account right now. 

Venturing into AI Opportunity
The European startup scene has long played second fiddle to the US. But with the rise of open-source AI unicorn Mistral and others, hopes are rising that European startups could become more competitive in the global AI marketplace. Paul Murphy, a partner at venture capital firm Lightspeed, one of the first funds to invest in Mistral, will tell us all about his predictions. 

The Business of Solving Big Challenges with AI
Colin Murdoch, Google DeepMind’s chief business officer, will show us why AI is so much more than generative AI and how it can help solve society’s greatest challenges, from gene editing to sustainable energy and computing. 

And the best bit of all: the post-conference drinks! A conference in London would not be nearly as fun without some good old-fashioned networking in a pub afterward. So join us April 16–17 in London, and get the inside scoop on how AI is transforming the world. Get your tickets here.

Before you go… We have a freebie to give you a taster of the event. Join me and MIT Technology Review’s editors Niall Firth and David Rotman for a free half-hour LinkedIn Live session today, March 26. We’ll discuss how AI is changing the way we work. Bring your questions and tune in here at 4pm GMT/12pm EDT/9am PDT.


Now read the rest of The Algorithm

Deeper Learning

The tech industry can’t agree on what open-source AI means. That’s a problem.

Suddenly, “open source” is the latest buzzword in AI circles. Meta has pledged to create open-source artificial general intelligence. And Elon Musk is suing OpenAI over its lack of open-source AI models. Meanwhile, a growing number of tech leaders and companies are setting themselves up as open-source champions. But there’s a fundamental problem—no one can agree on what “open-source AI” means. 

Definitions wanted: Open-source AI promises a future where anyone can take part in the technology’s development. That could accelerate innovation, boost transparency, and give users greater control over systems that could soon reshape many aspects of our lives. But what even is it? What makes an AI model open source, and what disqualifies it? The answers could have significant ramifications for the future of the technology. Read more from Edd Gent.

Bits and Bytes

Apple researchers are exploring dropping “Hey Siri” and listening with AI instead
So maybe our phones will be listening to us all the time after all? New research aims to see if AI models can determine when you’re speaking to your phone without needing a trigger phrase. The findings also show how Apple, considered a laggard in AI, is determined to catch up. (MIT Technology Review)

An AI-driven “factory of drugs” claims to have hit a big milestone
Insilico is part of a wave of companies betting on AI as the “next amazing revolution” in biology. The company claims to have created the first “true AI drug” that’s advanced to a test of whether it can cure a fatal lung condition in humans. (MIT Technology Review)

Chinese platforms are cracking down on influencers selling AI lessons
Over the last year, a few Chinese influencers have made millions of dollars peddling short video lessons on AI, profiting off people’s fears about the as-yet-unclear impact of the new technology on their livelihoods. Now the platforms they thrived on have started to turn against them. (MIT Technology Review)

Google DeepMind’s new AI assistant helps elite soccer coaches get even better
It can predict the outcome of corner kicks and provide realistic, accurate tactical suggestions in matches. The system, called TacticAI, works by analyzing a dataset of 7,176 corner kicks taken by players for Liverpool FC, one of the world’s biggest soccer clubs. (MIT Technology Review)

How AI taught Cassie the two-legged robot to run and jump
Researchers used an AI technique called reinforcement learning to help a two-legged robot nicknamed Cassie run 400 meters, over varying terrains, and execute standing long jumps and high jumps, without being trained explicitly on each movement. (MIT Technology Review)

France fined Google €250 million over copyright infringements 
The country’s competition watchdog says the tech company failed to broker fair agreements with media outlets for publishing links to their content and plundered press articles to train its AI technology without informing the publishers. This sets an interesting precedent for AI and copyright in Europe, and potentially beyond. (Bloomberg)

China is educating the next generation of top AI talent
New research suggests that China has eclipsed the United States as the biggest producer of AI talent. (New York Times)

DeepMind’s cofounder has ditched his startup to lead Microsoft’s AI initiative
Mustafa Suleyman has now left his conversational AI startup Inflection to lead Microsoft AI, a new organization focused on advancing Microsoft’s Copilot and other consumer AI products. (Microsoft)