Are we making spacecraft too autonomous?

When SpaceX’s Crew Dragon took NASA astronauts to the ISS near the end of May, the launch brought back a familiar sight. For the first time since the space shuttle was retired, American rockets were launching from American soil to take Americans into space.

Inside the vehicle, however, things couldn’t have looked more different. Gone was the sprawling dashboard of lights and switches and knobs that once dominated the space shuttle’s interior. All of it was replaced with a futuristic console of multiple large touch screens that cycle through a variety of displays. Behind those screens, the vehicle is run by software that’s designed to get into space and navigate to the space station completely autonomously. 

“Growing up as a pilot, my whole career, having a certain way to control a vehicle—this is certainly different,” Doug Hurley told NASA TV viewers shortly before the SpaceX mission. Instead of calling for a hand on the control stick, navigation is now a series of predetermined inputs. The SpaceX astronauts may still be involved in decision-making at critical junctures, but much of that function has moved out of their hands.

Does this matter? Software has never played a more critical role in spaceflight. It has made it safer and more efficient, allowing a spacecraft to automatically adjust to changing conditions. According to Darrel Raines, a NASA engineer leading software development for the Orion deep space capsule, autonomy is particularly key for areas of “critical response time”—like the ascent of a rocket after liftoff, when a problem might require initiating an abort sequence in just a matter of seconds. Or in instances where the crew might be incapacitated for some reason. 

And increased autonomy is practically essential to making some forms of spaceflight even work. Ad Astra is a Houston-based company that’s looking to make plasma rocket propulsion technology viable. The experimental engine uses plasma made out of argon gas, which is heated using electromagnetic waves. A “tuning” process overseen by the system’s software automatically figures out the optimal frequencies for this heating. The engine comes to full power in just a few milliseconds. “There’s no way for a human to respond to something like that in time,” says CEO Franklin Chang Díaz, a former astronaut who flew on several space shuttle missions from 1986 to 2002. Algorithms in the control system are used to recognize changing conditions in the rocket as it’s moving through the startup sequence—and act accordingly. “We wouldn’t be able to do any of this well without software,” he says.
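The tuning loop Chang Díaz describes can be sketched in miniature. This is an illustrative toy, not Ad Astra's control code: the resonance model and every number below are invented. The software sweeps candidate drive frequencies and locks onto the one that couples the most power into the plasma, far faster than a human operator could.

```python
# Illustrative sketch of an automated tuning sweep (the real control
# loop is not public; this coupling model is invented): scan drive
# frequencies and lock onto the one that best heats the plasma.

def absorbed_power(freq_mhz, resonance_mhz=25.0, width_mhz=2.0):
    """Toy resonance curve: power coupling peaks at the plasma's
    resonant frequency and falls off away from it."""
    return 1.0 / (1.0 + ((freq_mhz - resonance_mhz) / width_mhz) ** 2)

def tune(freqs_mhz):
    """Pick the drive frequency that maximizes measured coupling."""
    return max(freqs_mhz, key=absorbed_power)

candidates = [round(20 + 0.5 * i, 1) for i in range(21)]  # 20-30 MHz
best = tune(candidates)
assert abs(best - 25.0) < 0.5  # the sweep locks onto the resonance
```

In a real engine the "measurement" would come from sensors rather than a formula, and the sweep would repeat continuously as conditions change during startup.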

But overrelying on software and autonomous systems in spaceflight creates new opportunities for problems to arise. That’s especially a concern for many of the space industry’s new contenders, who aren’t necessarily used to the kind of aggressive and comprehensive testing needed to weed out problems in software and are still trying to strike a good balance between automation and manual control.

Nowadays, a few errors in over one million lines of code could spell the difference between mission success and mission failure. We saw that late last year, when Boeing’s Starliner capsule (the other vehicle NASA is counting on to send American astronauts into space) failed to make it to the ISS because of a glitch in its internal timer. A human pilot could have overridden the glitch that ended up burning Starliner’s thrusters prematurely. NASA administrator Jim Bridenstine remarked soon after Starliner’s problems arose: “Had we had an astronaut on board, we very well may be at the International Space Station right now.” 

But it was later revealed that many other errors in the software had not been caught before launch, including one that could have led to the destruction of the spacecraft. And that one was not something human crew members could easily have overridden.

Boeing is certainly no stranger to building and testing spaceflight technologies, so it was a surprise to see the company fail to catch these problems before the Starliner test flight. “Software defects, particularly in complex spacecraft code, are not unexpected,” NASA said when the second glitch was made public. “However, there were numerous instances where the Boeing software quality processes either should have or could have uncovered the defects.” Boeing declined a request for comment.

According to Luke Schreier, the vice president and general manager of aerospace at NI (formerly National Instruments), problems in software are inevitable, whether in autonomous vehicles or in spacecraft. “That’s just life,” he says. The only real solution is to aggressively test ahead of time to find those issues and fix them: “You have to have a really rigorous software testing program to find those mistakes that will inevitably be there.”

Enter AI

Space, however, is a unique environment to test for. The conditions a spacecraft will encounter aren’t easy to emulate on the ground. While an autonomous vehicle can be taken out of the simulator and eased into lighter real-world conditions to refine the software little by little, you can’t really do the same thing for a launch vehicle. Launch, spaceflight, and a return to Earth are actions that either happen or they don’t—there is no “light” version.

This, says Schreier, is why AI is such a big deal in spaceflight nowadays—you can develop an autonomous system that is capable of anticipating those conditions, rather than requiring the conditions to be learned during a specific simulation. “You couldn’t possibly simulate on your own all the corner cases of the new hardware you’re designing,” he says. 

So for some groups, testing software isn’t just a matter of finding and fixing errors in the code; it’s also a way to train AI-driven software. Take Virgin Orbit, for example, which recently tried to send its LauncherOne vehicle into space for the first time. The company worked with NI to develop a test bench that looped together all the vehicle’s sensors and avionics with the software meant to run a mission into orbit (down to the exact length of wiring used within the vehicle). By the time LauncherOne was ready to fly, its software believed it had already been to space thousands of times thanks to the testing, and it had already faced many different kinds of scenarios.
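A test bench like this works roughly as in the sketch below. It is a minimal illustration, not Virgin Orbit's or NI's software: the control logic, sensor names, and thresholds are all invented. Simulated sensor traces are replayed through the flight software, and each scenario's decisions are checked against what should have happened.

```python
# Hedged sketch of the hardware-in-the-loop idea: feed the flight
# software simulated sensor readings for many scenarios and check its
# responses before it ever leaves the ground. All names and thresholds
# here are invented for illustration.

def flight_software_step(altitude_m, engine_temp_c):
    """Stand-in for the vehicle's control logic: decide whether to
    continue the mission or abort, based on sensor inputs."""
    if engine_temp_c > 1500 or altitude_m < 0:
        return "abort"
    return "continue"

def run_scenario(sensor_trace):
    """Replay one simulated 'flight' through the software, as a test
    bench might, and record every decision it makes."""
    return [flight_software_step(alt, temp) for alt, temp in sensor_trace]

nominal = [(0, 800), (1_000, 900), (10_000, 1_000)]
overheat = [(0, 800), (1_000, 1_600)]

assert run_scenario(nominal) == ["continue", "continue", "continue"]
assert run_scenario(overheat) == ["continue", "abort"]
```

On a real bench the traces would come from high-fidelity physics simulations driving actual avionics hardware, not hand-written tuples, but the structure of the loop is the same.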

Of course, LauncherOne’s first test flight ended in failure, for reasons that have still not been disclosed. If software was to blame, the attempt is yet another sign that there’s a limit to how much an AI can be trained to handle real-world conditions.

Raines adds that in contrast to the slower approach NASA takes for testing, private companies are able to move much more rapidly. For some, like SpaceX, this works out well. For others, like Boeing, it can lead to some surprising hiccups. 

Ultimately, “the worst thing you can do is make something fully manual or fully autonomous,” says Nathan Uitenbroek, another NASA engineer working on Orion’s software development. Humans have to be able to intervene if the software is glitching or if the computer’s memory is corrupted by an unanticipated event (like a blast of cosmic rays). But they also rely on the software to inform them when other problems arise.

NASA is used to figuring out this balance, and it has redundancy built into its crewed vehicles. The space shuttle operated on multiple computers using the same software, and if one had a problem, the others could take over. A separate computer ran on entirely different software, so it could take over the entire spacecraft if a systemic glitch was affecting the others. Raines and Uitenbroek say the same redundancy is used on Orion, which also includes a layer of automatic function that bypasses the software entirely for critical functions like parachute release. 
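The shuttle-style redundancy described above can be sketched as a voting scheme. This is an illustrative toy, not flight code, and every value in it is invented: the primary computers' outputs are compared, a strict majority wins, and a backup running independently written software takes over when the primaries can't agree.

```python
# Illustrative sketch (not NASA flight code) of majority voting across
# redundant computers, with a dissimilar backup for systemic failures.
from collections import Counter

def vote(outputs):
    """Return the majority output from the primary computers, or None
    if no strict majority exists (e.g., a systemic software glitch)."""
    if not outputs:
        return None
    value, count = Counter(outputs).most_common(1)[0]
    return value if count > len(outputs) / 2 else None

def flight_command(primary_outputs, backup_output):
    """Prefer the primaries' majority; fall back to the independently
    written backup software if they cannot agree."""
    result = vote(primary_outputs)
    return result if result is not None else backup_output

# One faulty primary is simply outvoted:
assert flight_command([10.0, 10.0, 10.0, 9.1], backup_output=10.2) == 10.0
# A common-mode failure (no majority) falls through to the backup:
assert flight_command([1.0, 2.0, 3.0, 4.0], backup_output=10.2) == 10.2
```

The key design point is that the backup runs entirely different software, so a bug shared by all the primaries can't also take it down.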

On the Crew Dragon, there are instances where astronauts can manually initiate abort sequences, and where they can override software on the basis of new inputs. But the design of these vehicles means it’s more difficult now for the human to take complete control. The touch-screen console is still tied to the spacecraft’s software, and you can’t just bypass it entirely when you want to take over the spacecraft, even in an emergency. 

There’s no consensus on how much further the human role in spaceflight will—or should—shrink. Uitenbroek thinks trying to develop software that can account for every possible contingency is simply impractical, especially when you have deadlines to make. 

Chang Díaz disagrees, saying the world is shifting “to a point where eventually the human is going to be taken out of the equation.” 

Which approach wins out may depend on the level of success achieved by the different parties sending people into space. NASA has no intention of taking humans out of the equation, but if commercial companies find they have an easier time minimizing the human pilot’s role and letting the AI take charge, then touch screens and pilotless flight to the ISS are only a taste of what’s to come.

Another experimental covid-19 vaccine has shown promising early results

The news: An experimental covid-19 vaccine being developed by Pfizer and BioNTech provoked immune responses in 45 healthy volunteers, according to a preprint paper posted on medRxiv. Antibody levels were up to 2.8 times those found in patients who have recovered. The study randomly assigned the 45 volunteers to receive one of three doses of the vaccine or a placebo. But there were side effects like fatigue, headache, and fever, especially at higher doses. The researchers decided to discontinue the highest dose, 100 micrograms, after the first round of treatments.

Some caveats required: It’s promising news, but this is the first clinical data on this specific vaccine, and it hasn’t yet been through peer review. Higher antibody levels in patients who received the vaccine are a useful proxy for immunity to covid-19, but we don’t yet know for sure that they guarantee immunity. To find out, Pfizer will start conducting studies in larger groups of patients this summer. It says its goal is to have 100 million doses of a vaccine available by the end of 2020.

A common approach: Pfizer is using the same experimental technique as Moderna, one of the other pharmaceutical companies developing a vaccine. Both vaccines deliver messenger RNA, genetic instructions that direct the recipient’s cells to produce a coronavirus protein and so provoke an immune response against the virus. The method could provide a rapid way to develop a vaccine, but it has yet to produce one licensed for sale. Currently, 178 vaccines are in various stages of development; 17 are now going through clinical trials.

A plan to redesign the internet could make apps that no one controls

In 1996 John Perry Barlow, cofounder of internet rights group the Electronic Frontier Foundation, wrote “A declaration of the independence of cyberspace.” It begins: “Governments of the Industrial World, you weary giants of flesh and steel, I come from Cyberspace, the new home of Mind. On behalf of the future, I ask you of the past to leave us alone. You are not welcome among us. You have no sovereignty where we gather.”

Barlow was reacting to the US Communications Decency Act, an early attempt to regulate online content, which he saw as overreaching. But the broad vision he put forward of a free and open internet controlled by its users was one that many internet pioneers shared.

Fast-forward a quarter-century and that vision feels naïve. Governments may have struggled to regulate the internet, but new sovereigns have taken over instead. Barlow’s “home of Mind” is ruled today by the likes of Google, Facebook, Amazon, Alibaba, Tencent, and Baidu—a small handful of the biggest companies on earth.

Yet listening to the mix of computer scientists and tech investors speak at an online event on June 30 hosted by the Dfinity Foundation, a not-for-profit organization headquartered in Zurich, Switzerland, it is clear that a desire for revolution is brewing. “We’re taking the internet back to a time when it provided this open environment for creativity and economic growth, a free market where services could connect on equal terms,” says Dominic Williams, Dfinity’s founder and chief scientist. “We want to give the internet its mojo back.”

Dfinity is building what it calls the internet computer, a decentralized technology spread across a network of independent data centers that allows software to run anywhere on the internet rather than in server farms that are increasingly controlled by large firms, such as Amazon Web Services or Google Cloud. This week Dfinity is releasing its software to third-party developers, who it hopes will start making the internet computer’s killer apps. It is planning a public release later this year.

Rewinding the internet is not about nostalgia. The dominance of a few companies, and the ad-tech industry that supports them, has distorted the way we communicate—pulling public discourse into a gravity well of hate speech and misinformation—and upended basic norms of privacy. There are few places online beyond the reach of these tech giants, and few apps or services that thrive outside of their ecosystems.

There is an economic problem too. The effective monopoly of these firms stifles the kind of innovation that spawned them in the first place. It is no coincidence that Google, Facebook, and Amazon were founded back when Barlow’s cyberspace was still a thing.

The Internet Computer

Dfinity’s internet computer offers an alternative. On the normal internet, both data and software are stored on specific computers—servers at one end and laptops, smartphones, and game consoles at the other. When you use an app, such as Zoom, software running on Zoom’s servers sends data to your device and requests data from it.

This traffic is managed by an open standard known as the internet protocol (the IP in IP address). These long-standing rules are what ensure that the video stream of your face finds its way across the internet, from network to network, until it reaches the computers of the other people on the call milliseconds later.

Dfinity is introducing a new standard, which it calls the internet computer protocol (ICP). These new rules let developers move software around the internet as well as data. All software needs computers to run on, but with ICP the computers could be anywhere. Instead of running on a dedicated server in Google Cloud, for example, the software would have no fixed physical address, moving between servers owned by independent data centers around the world. “Conceptually, it’s kind of running everywhere,” says Dfinity engineering manager Stanley Jones.
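The idea of software with no fixed physical address can be illustrated with a toy scheduler. None of the names below come from the actual ICP; the class and methods are invented for this sketch. An app is placed on whichever independent data center is chosen, and can later be moved without callers ever holding a fixed address for it.

```python
# Toy illustration (invented names, not the real ICP API) of
# location-independent software: apps are assigned to, and moved
# between, independent data centers rather than a fixed server.
import random

class InternetComputer:
    def __init__(self, data_centers):
        self.data_centers = data_centers  # independent operators
        self.placements = {}              # app -> current host

    def deploy(self, app):
        """Place an app on any available data center."""
        self.placements[app] = random.choice(self.data_centers)

    def migrate(self, app):
        """Move the app elsewhere; callers never see a fixed address."""
        others = [dc for dc in self.data_centers
                  if dc != self.placements[app]]
        self.placements[app] = random.choice(others)

ic = InternetComputer(["zurich-dc", "tokyo-dc", "toronto-dc"])
ic.deploy("cancan")
before = ic.placements["cancan"]
ic.migrate("cancan")
assert ic.placements["cancan"] != before  # same app, new physical home
```

The real protocol adds the parts that make this safe and trustworthy, such as replication across data centers and the crypto-token payments mentioned below, but the core abstraction is the same: the app's location is a scheduling detail, not an address anyone depends on.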

In practice, it means that apps can be released that nobody owns or controls. Data centers will be paid a fee, in crypto tokens, by the app developers for running their code, but they won’t have access to the data, making it hard for advertisers to track your activity across the internet. “I don’t want to hammer the data privacy angle too much because, honestly, ad-tech continues to surprise me with its audacity,” says Jones. Still, he says, the internet computer should change the game.

A less welcome upshot is that a free-for-all internet could also make it difficult to hold app makers accountable. Who is on the other end of the phone if you need to take down illegal or abusive content? It’s a concern, says Jones. But he points out that it isn’t really any easier with Facebook: “You say, hey, can you take down these videos? They say no. It kind of depends on how Zuckerberg is feeling that day.”

In fact, a decentralized internet may lead to a decentralized form of governance, in which developers and users all have a say in how it is regulated—much as Barlow wanted. This is the ideal adopted in the crypto world. But as we’ve seen with Bitcoin and Ethereum, it can lead to infighting between cliques. It is not clear that mob rule would be better than recalcitrant CEOs.  

Still, Dfinity and its backers are confident these issues will get worked out down the line. In 2018, Dfinity raised $102 million in a crypto token sale that valued the network at $2 billion. Investors include Andreessen Horowitz and Polychain Capital, both big players in the Silicon Valley venture capital club.

It is also moving fast. This week, Dfinity showed off a TikTok clone called CanCan. In January it demoed a LinkedIn-alike called LinkedUp. Neither app is being made public, but they make a convincing case that apps made for the internet computer can rival the real things. 

Remaking the internet

But Dfinity is not the first to try to remake the internet. It joins a list of organizations developing a range of alternatives, including Solid, SAFE Network, InterPlanetary File System, Blockstack, and others. All draw on the techno-libertarian ideals embodied by blockchains, anonymized networks like Tor, and peer-to-peer services like BitTorrent.

Some, like Solid, also have all-star backing. The brainchild of Tim Berners-Lee, who came up with the basic design for the web in 1989, Solid provides a way for people to keep control of their personal data. Instead of handing over their data to apps like Facebook or Twitter, users store it privately, and apps must request what they need.

But Solid also shows how long it takes to change the status quo. Though it is a less ambitious proposal than Dfinity’s internet computer, Solid has been working on its core technology for at least five years. Berners-Lee talks about correcting the course of the internet. Yet overcoming the inertia of an internet pulled along by juggernauts like Google and Amazon is hard. Inventing the web is one thing; reinventing it is another.

Other projects tell a similar story. The SAFE Network, a peer-to-peer alternative to the internet in which data is shared across the hard drives of participating computers rather than held in central data centers, has been a work in progress for 15 years. An open-source community of developers has built a handful of apps for the network, including a Twitter clone called Patter and a music-player app called Jams. “My sole goal is to take data away from the corporations and put it back with the people,” says founder David Irvine. But he admits that the SAFE Network itself is still nowhere near public release.

Lalana Kagal at MIT’s Computer Science and Artificial Intelligence Lab, who is the project manager for Solid, admits that progress is slow. “We haven’t seen as much adoption as we could have,” she says.

Even when Solid is ready for full release, Kagal expects that only people who really worry about what happens to their personal data will make the switch. “We’ve been talking about privacy for 20 years and people care about it,” she says. “But when it comes to actually taking action, nobody wants to leave Facebook.”

Even within the niche communities of developers working to make a new internet, there is little awareness of rival projects. Neither Irvine nor the three people I emailed who had worked on Solid, including Kagal, had heard of Dfinity. People I spoke to at Dfinity had not heard of the SAFE Network.

It’s possible that the internet may be forced to change whether the average user cares or not. “Privacy regulations could become so restrictive that companies will be forced to move to a more decentralized model,” says Kagal. “They might realize that storing and collecting all this personal information is just not worth their while anymore.”

But all of this assumes that the internet can be weaned off its core business model of advertising, which determines both the minutiae of data collection and the balance of power at the top. Dfinity believes that making the internet a free market again will lead to a boom in innovation like the one we saw in the dot-com days, with startups exploring new ways to make money that don’t rely on indiscriminate processing of personal data. Kagal hopes that more people will choose to pay for services rather than using freemium ones that make money from ads. 

None of this will be easy. In the years since Barlow wrote his polemic, the data economy has sunk deep roots. “It would be great if it was replaced with Solid,” says Kagal. “But it would be great if it was replaced with something else as well. It just needs to be done.”

Podcast: Covid-19 has exposed a US innovation system that is badly out of date

Ilan Gur always wanted to build things. But after finishing his PhD in materials science at UC Berkeley, he says he “bounced around, feeling like a misfit.” He left the publish-or-perish world of academia, and burned through a few million dollars before realizing that venture capital isn’t the right way to fund applied research, either.

If solving a problem like pandemic preparedness isn’t immediately profitable, the market won’t solve it, Gur, who founded the fellowship programs Cyclotron Road and Activate, now argues. That’s why he thinks the US needs a new way to allot R&D funds based on impact, not profits, and in an essay for the July issue of Technology Review, he calls for a new playbook for government funding of applied research. We sat down with him to learn more about why the current system of R&D funding is out of date, and how a new one could help the US better address its current needs as well as prepare for the future.

Show Notes and Links

How the US lost its way on innovation, June 17, 2020

Why venture capital doesn’t build the things we really need, June 17, 2020

Cyclotron Road


Full Episode Transcript

Ilan Gur: Who was going to spend the money on developing solutions to a pandemic that didn’t yet exist? 

Wade Roush: Ilan Gur runs a fellowship program designed to help more scientists and engineers turn their ideas into products.

Ilan Gur: That’s a market failure that industry just isn’t going to solve by itself, but where you need industry’s involvement to develop those practical solutions. And so then the question becomes, how do we do that?

Wade Roush: In Ilan’s view, America’s whole system for moving basic research to the marketplace is sorely outdated, and this disconnect helps explain why the country was caught unprepared when the pandemic hit. He wrote about the problem for the latest issue of Technology Review. And we’ll talk with him about the three big steps he thinks we should take to get R&D back in sync with our practical needs. I’m Wade Roush, and this is Deep Tech.

[Deep Tech theme]

Wade Roush: If you were a kid in the 1980s you might remember this public service announcement from cartoons on Saturday morning TV.

National Science Foundation public service announcement:

To know the world from A to Z

Discovery science and technology

Astronomy, biology, chemistry, zoology

Science and technology—it’s fun, you’ll see!

A public service message from the National Science Foundation

Wade Roush:  For all its cuteness, that old PSA is a pretty good reflection of the way the federal government has funded basic science ever since World War II. Meaning, the money has mostly gone toward building up fundamental disciplines like astronomy, biology, chemistry, and zoology, on the theory that a stream of new scientific knowledge would eventually turn the wheels of private enterprise. 

Ilan Gur thinks that was the right philosophy when the National Science Foundation was getting its start back in 1950, when most basic research was confined to universities and big industrial labs. But it may not work so well today, when innovation can bubble up in all sorts of places, including startups, and when it seems like we can’t always trust the marketplace to guide innovation toward our most pressing needs.   

Ilan is a PhD materials scientist based in Berkeley, California, and the founder of a fellowship program for scientist-entrepreneurs called Cyclotron Road. He’s also the CEO of a nonprofit called Activate that’s working to replicate the Cyclotron Road model in other locations. His essay “How the US Lost Its Way on Innovation” is in the July issue of Technology Review.

Ilan Gur: We’ve got such a rich infrastructure for innovation in the United States and yet there’s so much holding us back from realizing the potential of that infrastructure. The essay is really about the idea that because of the way the research innovation system in the US has been organized, and because we haven’t had many opportunities to take a fresh look at that organization—those organizing principles—we end up with a lot of stranded opportunities to get the most value from all the great talent and ideas that we have in the country, both to advance science, but also to make sure that the scientific underpinnings we have can be powerful tools to respond to the needs of society. With covid-19 being a really prime case study and example.

Wade Roush: Ilan says he’s been inspired to see how many researchers are mobilizing in the pandemic to try new ideas in areas like testing and vaccines and medical equipment. But he also thinks they’re scrambling to make up for a very late start.

Ilan Gur: As scientists, when historically we have looked at what are the greatest threats to society, including some of the greatest existential threats, pandemics, global pandemics are always at the top of that list. And it’s never been a question of if, it’s always a question of when. Why, when it did happen, did we not have the tools to address it ready? You know, that’s certainly not just a question for science. It’s a question for government and a question for policy and a question of where our priorities are and how we invest. But for me, it’s an indicator that there’s something missing in the way that we’re organized, in the way that we’re prepared to have science and engineering really make the impact we want.

Wade Roush: I’m really curious about Cyclotron Road, which is an actual road in Berkeley, right? But it’s also the name of an organization that you created back in 2014. So what is it? And what’s the mission?

Ilan Gur: You know, my own personal experience, feeling like a bit of a misfit, navigating these different institutions from academia to venture to government funding where I ended up was with this strong sense that each of these institutions had a really strong role in how we advance science. You know, universities are really well set up to do the ideation and do the investment in talent. Corporations are really well set up to take technology and drive it to products and distribute it. My deep interest was in how do you do that step of translating what’s coming out of the research lab into something that ends up at the doorstep of the market as a product. And what was missing for me is, who owns that part of the journey institutionally? I couldn’t find the place that owned that part of the journey. Because of that, there was a lot of stranded talent and ideas in the country coming out of our scientific institutions. And that seemed like a really big missed opportunity. And so what I wondered was, well, what if you built a home specifically for these folks who had become cutting edge experts in science and engineering who were motivated, who wanted to take that research to the next step and translate into a product, but they didn’t feel like they had the right support mechanism to do that. And we basically designed Cyclotron Road as what would be the perfect environment to support people in that transition.

Wade Roush: Ilan says Lawrence Berkeley National Lab agreed to host the program. The lab is named after physicist Ernest O. Lawrence, the inventor of the cyclotron, hence the name.

Ilan Gur: The basic construct of that program is we run a competition once a year. We say if you’re a top of your class scientist or engineer and you want to take the next step in moving your ideas out of the research lab. But you’re caught between these two worlds, right? What you’re working on is too applied for academia or a traditional research lab. But too speculative for private investment. Come here and we’ll support you for two years with a fellowship that allows you to focus on that transition. And that’s proved to be a really powerful model in the early data that we’ve gotten and the organization I now run, Activate, is a nonprofit that’s basically set up to take that experiment that we ran at Cyclotron Road in Berkeley and figure out how to expand that and offer that opportunity to more scientists and engineers around the country. 

Wade Roush: So in a way, you’re trying to reinvent applied research. But one of the points you make in your piece is that we actually kind of used to know how to do this and that there was, in effect, a wonderful, almost golden age of cooperation between government and business after World War II. At some point, maybe starting in the 70s and 80s, that all fell apart. And I wanted to get your diagnosis of what went wrong. I think the way you put it in the article was we fell asleep at the wheel.

Ilan Gur: The first thing to realize is pre-World War II, the US government did very little when it comes to funding science education and scientific research. And that’s important, right? University work was basically in the domain of philanthropy, as far as I understand it. And the real powerhouse for scientific research, including more fundamental research, was within big companies, if you think of the Duponts, the Bell Labs. So that was the kind of pre-World War II state. All of that changed in World War II. And the simple way to think about that is to just fast forward to the end, which is, you know, you could argue that the outcome of the war really turned on science and technology and engineering. We developed radar. We developed the bomb. The outcome was clear that that was an investment that paid off for the country. After the war, there was a big question, OK? Now what? We just mobilized all this funding, but we never thought about like, what should that role be outside of the World War?

Wade Roush: Ilan points out that one of the leading voices in this debate was Vannevar Bush, a former dean of MIT’s School of Engineering who had helped to create both the radar project and the atomic bomb project. Bush argued in a report to President Truman that it was now time for a massive government investment in basic research.

Ilan Gur: What resulted from that is essentially the entire science policy and research infrastructure that we have in the US today, NSF, NIH, the national lab system, et cetera. That was a really thoughtful position and a really thoughtful argument for the time. But if you look at it, we’ve leveraged that same policy framework and perspective since the 1950s through today with very little deviance, even though the world has changed a lot. The reason I use the words “fall asleep at the wheel” in the essay is because no one stopped to recognize that the assumptions from post-World War II no longer hold. We went from scientific talent and ideas being a core bottleneck that the government had to support to now, where I’d argue that we have at least a healthy supply, if not an oversupply of scientific talent and ideas. And what we’re missing is the capacity to translate those ideas into products and businesses.

Wade Roush: You outline three key steps that the nation could take to revitalize research and development. I’m curious about what kind of world you think might emerge if people took these three pieces of advice seriously. So the first one is, “Don’t just fund research, fund solutions.” Can you say a little more about what you mean by that? What does that mean to you when you say “funding solutions?”

Ilan Gur: The example I give in the article is it’s really easy to look up how much funding was spent on bio sciences research in the country. It’s very hard to look up how much funding was spent on pandemic preparedness and response. And the reason for that is because the entire system is organized around, if we think back to the history, right, early government funding went towards universities and government labs, it went towards fundamental research. So it was all built around the disciplines and the incentives of those organizations. You have a physics department, you have a math department, you have a computer science department. The National Science Foundation lets you look up data on where the government spends money on research. If you look it up, you can sort that data by field of science. You can’t sort that data by what problems were we actually trying to solve with any of those research dollars.

Ilan Gur: The incentives are also generally around knowledge creation, right? They’re around publishing papers. They’re around advancing science. What if I want to be a cutting-edge scientist and work for an organization that cares about how to drive that science into applications? Who’s going to write me a paycheck to do that speculative work? Right. And so that’s, I think, part of the dislocation. I would never suggest that we shouldn’t be funding fundamental and disciplinary research. We need that. That’s where the sort of seeds for everything we’re talking about in terms of value and impact come from. But it would be nice to have a balance.

Wade Roush: Your second policy recommendation is that we need to get over our aversion to funding industry. And I guess what you mean is that government needs to be more open to sending research dollars to startups or to tech companies. Right? What would be some of the key steps to actually enacting that recommendation?

Ilan Gur: One of the things I’ve learned about government is, you know, rightfully there’s a stewardship element there, which is if I’m going to spend taxpayer dollars, I should make sure that I’m not wasting them. And one of the risks in wasting taxpayer dollars in research is that you spend money on something that the private sector would have done otherwise. And so there’s a real concern around this idea of let’s not be redundant with the private sector and let’s make sure that research expenditures are addressing a market failure, something that wouldn’t otherwise happen. I think one of the important things we need to recognize is that there are a lot of market failures. And covid-19 is a great example of this. Who was going to spend the money on developing solutions to a pandemic that didn’t yet exist? You know, that’s a market failure that industry just isn’t going to solve by itself, but where you need industry’s involvement to develop those practical solutions. And so then the question becomes, how do we do that? How do we get over our aversion to funding industry, and how do we fund it responsibly?

Ilan Gur: Is it as simple as just having the government fund more research in industry, similar to what it used to do? I had an interesting conversation with the CTO of a major industrial company in the US. And he said, “Well, here’s a problem. If the government started putting more money into the company that I was the CTO of”—he’s the former CTO—“to do really speculative research and early translation, my company wouldn’t know what to do with that money.” We don’t have the capacity within these big industrial companies to do that type of innovation anymore. And what this person said to me was, you know, right now that type of innovation is really happening in startups, right? Big companies are pulling in innovation by gobbling up startups. And there’s such a richness in science-based startups and the early-stage, innovative research that’s happening there. I think one of the important lessons and takeaways for me is that the government is really poorly positioned to fund research within startups, and it’s a huge missed opportunity.

Wade Roush: So your third recommendation is “focus on what matters for the future.” What I’m curious about here is who should get to decide what matters. Funding is such an inherently political process, right? So how do we decide that?

Ilan Gur: The simple answer to this is, you know, we have a government system to think about what the priorities are to serve our society. And so ultimately, you know, we need that government system to operate and figure out what those priorities are. This is actually a great opportunity for me to mention one of my heroes and mentors in this space. Arati Prabhakar is the former director of DARPA, but she’s also one of these folks who in her career has transcended and crossed between different worlds. She spent time as a venture capitalist, as the CEO of a company, and in government, both at NIST and then as the head of DARPA. And she points out something really interesting, and this relates to the history we were talking about: if you go back to the founding documents around how we should, say, organize the science and innovation infrastructure of the country after World War II, and you read Vannevar Bush’s famous essay…

Wade Roush: The Endless Frontier.

Ilan Gur: The Endless Frontier. You do a keyword search in that essay. Guess what? You won’t find the word Internet. You won’t find the word privacy. You won’t find the words climate change. You won’t find anything about gene engineering. There’s always the question of what’s the priority right now. But certainly over the course of decades, we can agree that the major priorities for how science needs to serve society have shifted. And there are new categories of priorities. And there are new approaches that have emerged. And there are new institutional frameworks. Startups. Right? You know, entrepreneurship. And so the question becomes, how can those changes be reflected in the organizing principles and the way we fund and support research in the country? You know, we had an Atomic Energy Commission, and funding to go with it. Right? Should there be one of those for climate change, given what we know? I don’t know the answer. But certainly there should be a conversation about it.

Wade Roush: Right. Right. So you’re saying we need to be able to be more flexible, both in terms of our sort of shift from discipline to discipline to meet whatever the current needs are, and maybe willing to invent new institutions, whole new organizational structures around science funding, and not be caught up in whatever model was invented 50 years ago.

Ilan Gur: Yeah, and those are not easy changes to make. I think some in the policy world would say those are nearly impossible changes to make. I think it’s one of the reasons why it’s so important to be having this dialogue right now in light of covid-19, because I think there is an openness right now to thinking about, well, you know, how should we build the research innovation infrastructure for the future to be better? Right. So, you know that leaves me optimistic. Regardless of how you feel about the response to covid-19 or otherwise, you know, fundamentally what’s great about working in science is that it’s about optimism. Right? It’s about the future. It’s about hope. And so I would just say, you know, we should be inspired by all the work that scientists and engineers are doing right now to get ahead of covid-19. We should celebrate that and we should be amazed by what we can accomplish with science, if we’ve got the motivation and the support to do it.

Wade Roush: That’s it for this edition of Deep Tech. This is a podcast we’re making exclusively for subscribers of MIT Technology Review, to help bring alive the ideas our journalists are writing and thinking about.

You can find Ilan Gur’s full essay in the July issue of Technology Review, which also features the TR35. It’s a list of 35 innovators under the age of 35 who are working to advance technology in areas like photovoltaics, batteries, and machine vision. For more than 20 years readers have been looking to our list to find out who’s up and coming in science, engineering, and entrepreneurship, and whose inventions are going to change the world. Check out the whole list at

Deep Tech is written and produced by me and edited by Jennifer Strong and Michael Reilly. Our theme is by Titlecard Music and Sound in Boston. I’m Wade Roush. Thanks for listening, and we hope to see you back here in two weeks for our next episode. 

There’s not one reason California’s covid-19 cases are soaring—there are many

It’s troubling, though not surprising, to see covid-19 cases spiking across the American South and Southwest, where public officials delayed lockdowns, rushed to reopen businesses, or refused to require people to wear masks.

But what’s the matter with California? The nation’s most populous state was the first to enact statewide shelter-in-place rules, took decisive steps to build up the recommended testing and case tracing capacity, and has hammered the public health message on social distancing and masks.

Yet new cases are rising sharply in pockets throughout the sprawling state, even as they’re flat or falling across much of the East Coast. Positive tests over the last seven-day period have risen 45%, regularly topping 5,000 a day, Governor Gavin Newsom said during a press conference on Monday. Hospitalizations and intensive care unit admissions are both up around 40% over the past few weeks as well, threatening to overwhelm health-care systems.

In turn, Newsom has pressed Imperial County—the southernmost part of the state, where skyrocketing case loads have forced officials to move hundreds of patients to hospitals in neighboring areas—to fully reinstate stay-at-home orders. He’s also recommended or required that more than a dozen counties shut down their bars or keep them closed, including Los Angeles and Santa Clara, the home of Silicon Valley. Meanwhile, San Francisco’s mayor halted the city’s reopening plan on Friday.

So what’s driving the outbreaks in a state that supposedly did things right? Why weren’t its ambitious testing and contact tracing programs adequate to prevent the recent surge in cases?

“It’s not one thing, but four or five,” says George Rutherford, an epidemiologist at the University of California, San Francisco, who is leading the university’s training program for the state’s contact tracing task force. “The state is so big—the population of California is larger than Canada’s—and there’s a lot of different things going on in different places.”

Health officials believe the state’s efforts to boost testing and rapidly track down infections are helping. California’s number of cases per capita—567 per 100,000—is well below the rates for states like Alabama, Arizona, or Florida. And Rutherford says about 85% of the people known to have interacted with positive patients are returning calls or answering questions from the state’s contact tracers, who are tasked with tracking down possible infections and encouraging people to quarantine or isolate themselves.

But clearly not enough people are strictly following these recommendations, and others, from public health officials—sometimes due to carelessness, and sometimes because of financial strains and other constraints.

Here are some of the main drivers at work:

Ethnic disparities

Throughout the state, Latinos make up by far the largest share of cases (56%) and deaths (42%), according to data from the California Department of Public Health. While Latinos make up 39% of the population, whites are a close second at 37% but represent only 17% of covid-19 cases.

These infections appear to be concentrated within low-income communities, where people are often essential workers who can’t do their jobs from home, can’t afford to call in sick, and may live in crowded housing conditions, according to information from contact tracing programs as well as other research and reporting. Language, immigration status, and financial issues can complicate efforts to successfully reach infected patients or their close contacts in these communities, and to convince them to isolate themselves for extended periods.

Early results from a covid-19 screening project in San Francisco’s heavily Hispanic Mission neighborhood found that 95% of those who tested positive were “Hispanic or Latinx.” And 90% of infected patients said they couldn’t work from home.

People are becoming cavalier

Another major factor is that people are ignoring safety practices, according to a state breakdown of counties experiencing rising cases. As regions relax stay-at-home rules, families, friends, and strangers are increasingly gathering in homes, bars, restaurants, and other venues. Too often, they’re not wearing masks or staying far enough away from each other, said Mark Ghaly, secretary of California’s Health and Human Services Agency, during the Monday press conference.

Los Angeles County has become the nation’s largest epicenter of the disease, with nearly 98,000 confirmed cases, according to Johns Hopkins University’s coronavirus tracking map.

The Los Angeles County Department of Public Health announced on Sunday that it would heed Newsom’s directive to shut down bars, noting that the region’s sharp increase in cases and hospitalizations directly coincides with the reopening of businesses a few weeks earlier. Those include breweries, pubs, wineries, and other venues “where people remove their face covering to drink while they may be socializing with people not in their households,” the statement read.

“I implore that our residents and businesses follow the public health directives that will keep us healthy, safe, and on the pathway to recovery,” said Barbara Ferrer, the county’s director of public health. “Otherwise, we are quickly moving toward overwhelming our health-care system and seeing even more devastating illness and death.”

Explosions in prison cases

More than 2,500 state and federal prison inmates throughout California are infected with the coronavirus. More than 1,000 prisoners and staff members tested positive in San Quentin State Prison alone during the last few weeks, in an outbreak linked to the transfer of inmates from the California Institution for Men in Chino, where there are more than 500 active cases.

The spillover of patients into local hospitals has forced Marin County, where San Quentin is located, to pause its plans to reopen gyms, hotels, and other businesses.

An influx of cases from elsewhere

A variety of other factors are driving higher case counts, including increasingly widespread testing across the state (which totaled nearly 106,000 on Sunday), continuing outbreaks in nursing homes in several counties, and patients from outside California crowding into counties with better testing and treatment.

Part of what’s driving the soaring case loads in Imperial County is the influx of positive patients from Mexico. State officials say they’re primarily US citizens, hundreds of thousands of whom live in neighboring Baja, crossing back in search of superior health care.

The county has by far the state’s highest case numbers on a per capita basis, 3,414 per 100,000, as well as a positivity rate for tests that’s more than four times the state average.

The different drivers demand different interventions, health experts say. Officials need to make extra efforts to communicate with low-income Latino patients and provide money, food, housing, or other services to help them isolate while they’re infectious. (San Francisco has some programs like this in place, but clearly more are needed throughout the state.) Prison systems need to keep infected inmates isolated, and ensure that they’re no longer spreading the disease across facilities. And nursing homes should test patients and workers more often, and step in more rapidly at the earliest signs of an outbreak.

But pretty much all of this has been known from the start. Californians need to recognize that the dangers haven’t passed, even as regions relax certain rules. Everyone still has to maintain their distance from others, vigorously wash their hands, and abide by the one public health decree that may help the most.

“Wear masks,” UCSF’s Rutherford says.

Is it safe to send kids back to school?

Covid-19 has been disruptive and bewildering for everyone, but especially for children. In the UK and in most US states, schools closed in March. Many of them will keep their doors shut until the fall. That’s six months without the normality of a school day, not to mention a significant break without any formal education for the many children who cannot access online classes.

It’s a global issue. Schools have had to close in 191 countries, affecting more than 1.5 billion students and 63 million teachers, according to the United Nations. But in many countries, schools are now cautiously reopening: in Germany, Denmark, Vietnam, New Zealand, and China, children are mostly back behind their desks. These countries all have two things in common: low levels of infection and a reasonably firm ability to trace outbreaks. 

What about the UK or the US, where the number of cases is relatively high and tracing systems are still in the early stages? How will we know when it’s safe for children to return? There can never be a cast-iron guarantee. But for parents to be able to gauge the level of risk, there are three questions that need answering. How susceptible are children to covid-19? How badly does it affect them? And do they spread it to others?

We know that children are less likely to catch covid-19 than adults. They’re about half as likely, to be precise, according to a recent study by the London School of Hygiene & Tropical Medicine (LSHTM) using data from China, Italy, Japan, Singapore, Canada, and South Korea, published in Nature Medicine. A survey of 149,760 people with covid-19 by the US Centers for Disease Control and Prevention found that children 17 and under, who make up 22% of the US population, account for fewer than 2% of confirmed infections across the United States. 

These findings were supported by a meta-analysis of 18 studies carried out by researchers at University College London, which found that under-18s were 56% less likely to catch coronavirus from an infected person than adults. On the flip side, children are likely to have more close contact with others than adults do, especially in a school, which could partly offset the protective benefit they get from being less likely to catch the virus in the first place. Even so, the numbers look promising.

If children do become infected in spite of this, how badly does it affect them?

The LSHTM study suggests that when children catch covid-19, they usually get very mild effects. Only one in five of those aged 10-19 had any clinical symptoms, compared with 69% of adults over 70. Children are extremely unlikely to die from coronavirus: during the peak nine weeks of the pandemic in England and Wales, just five children 14 and under died, out of a population of almost 11 million in that age group, according to official data analyzed by David Spiegelhalter, a statistician at Cambridge University. A preprint in the journal Public Health found that across seven countries up to May 19, there were 44 covid-19 deaths out of over 137 million children 19 and under. That’s a rate of less than 1 in 3 million. There is an unpleasant new covid-linked inflammatory syndrome in children similar to Kawasaki disease, but it’s extremely rare. “I think there have been fewer than 500 cases reported worldwide,” says Tina Hartert, a medicine professor at the Vanderbilt Institute for Infection, Immunology, and Inflammation in Nashville, Tennessee. The message seems to be that parents should not worry unduly about what might happen to their kids should they catch the virus. 

The final crucial question: to what extent do children spread the coronavirus once infected? “If you look at the peer-reviewed literature, it’s very mixed. The simple answer is we don’t know,” says Jeffrey Shaman, an infectious diseases expert at Columbia University. A nine-year-old boy with coronavirus in the French Alps in February did not transmit the virus to anyone else despite exposure to more than 170 people, including close contact within schools. However, we shouldn’t read too much into a study of one. On the other hand, researchers from Berlin University tested 3,712 covid-19 patients, 127 of whom were under 20, and concluded that children can carry the same viral load as adults, which seems to correlate with infectiousness.

One of the biggest fears is that a child could pick up the coronavirus at school and then bring it home to Grandma. “The risk to the kids is low, and it’s not bad for me or my partner, but I do worry about them going back to school and then seeing my parents,” says Kirsten Minshall, a father of two boys aged 9 and 11 who lives in a seaside town in Kent in the UK.

It is possible for children to introduce covid-19 into their household—a study from China identified three occasions when a child under 10 was the “index case” in a home. But it seems to be rare.

The crux of the issue is data, or more precisely a lack of it. Because children are less likely to catch covid-19, and are likely to have milder symptoms if they do, they are less likely to be seen by doctors or tested. That means high-quality, reliable data on this question is hard to come by. 

A large National Institutes of Health–funded study in the US that launched last month should help. It’s going to test nasal swabs from nearly 2,000 families in 10 cities every two weeks. The aim is to work out what role children play in transmission, says Hartert, who is leading the study. Enrollment has just finished, and she expects the first results within weeks. 

Population-wide serological surveys—which test for the presence of antibodies against covid-19 in blood samples—will also help plug the data gap. Studies comparing areas where schools have reopened and those where they have not could be hugely helpful, too. If it ends up being the case that children are less susceptible to infection, that suggests closing schools won’t be a very important way to reduce transmission across society, says Rosalind Eggo, an infectious disease modeler at LSHTM, who was involved in the study. However, she warns that it’s tricky to disentangle the closure of schools from all the other actions that were taken at the start of the pandemic.

“It’s very difficult to work out what happened to transmission when schools closed, because that happened at the same time as a lot of other interventions, like a general lockdown, distancing, and increased hygiene,” she says.

But none of this addresses a major group, without which no school can function: teachers. 

“Some teachers will be elderly, and there’s no easy answer for them. They’re incredibly high risk,” says Hartert. Many of the schools that have reopened around the world have introduced distancing measures and schedules that minimize contact between school groups. 

“I’m less afraid teaching than I am going to the supermarket,” says Marleen Slingenbergh, the head of biology at Alexandra Park School in London, one of the schools that have reopened for a small proportion of their students. She says that’s because the school has prioritized safety⁠—students have to sanitize their hands between lessons, teachers are required to stay at least two meters away from students at the front of the class, and there are strict “one at a time” bathroom policies, for example. 

That said, the majority of students haven’t returned yet. Slingenbergh fears it won’t be possible to maintain the safety measures when school returns in full in September. “With one week on, one week off, it’s possible. When we have 1,600 students, it will be tricky, especially during the changeover between lessons,” she says. 

Ultimately, the crucial thing for schools may be their ability to respond flexibly—closely monitoring for any potential outbreaks and quickly closing when necessary. 

There is, understandably, a lot of pressure from parents to keep their children safe, and many are still not comfortable with sending them back to school, says Slingenbergh. But most of them recognize it’s a delicate balance. “It’s all about weighing up the risks of covid, the kids getting proper schooling, and looking after their mental health,” Minshall says.