SpaceX’s Starship has finally stuck the landing on its third flight

On March 3, SpaceX’s Starship pulled off a successful high-altitude flight—its third in a row. Unlike in the first two missions, the spacecraft stuck the landing. Then, like its two predecessors, it blew up.

What happened: At around 5:14 p.m. US Central Time, the 10th Starship prototype (SN10) was launched from SpaceX’s test facility in Boca Chica, Texas, flying about 10 kilometers into the air before descending and touching down safely.

About 10 minutes later, the spacecraft blew up, from what appears to have been a methane leak. Still, the actual objectives of the mission were met.

What’s the big deal? This is the first time Starship has landed safely after a high-altitude flight. SN8 flew on December 9, climbing 12.5 km into the air before crashing in an explosive wreck when it hit the ground too fast. SN9, flown to an altitude of 10 km on February 2, met virtually the same fate during its attempted landing. Both missions tried to land using only two of the spacecraft’s three engines. SN10, on the other hand, used all three, nailing the vertical landing, albeit ending up a little lopsided.

What’s the Starship? It’s the vehicle that SpaceX is developing to one day send astronauts to the moon, Mars, and other destinations beyond Earth’s orbit. It’s 50 meters tall, weighs over 1,270 metric tons when loaded with fuel, and is supposed to be able to take more than 100 tons of cargo and passengers into deep space. In its final form, Starship sits on top of the Super Heavy rocket (currently in development) and doubles as a second-stage booster. Both the Super Heavy and Starship itself will use the company’s methane-fueled Raptor engines.

What’s next: That’s not entirely clear. SpaceX has now proved that Starship can fly high into the air and land safely. SN11 might fly the same profile, or the company might subject it to some other test. But SpaceX is definitely closer to its goal of flying Starship into space sometime this year. CEO Elon Musk has previously expressed hopes of launching people to Mars by 2026, or even 2024.

“We’ll never have true AI without first understanding the brain”

The search for AI has always been about trying to build machines that think—at least in some sense. But the question of how alike artificial and biological intelligence should be has divided opinion for decades. Early efforts to build AI involved decision-making processes and information storage systems that were loosely inspired by the way humans seemed to think. And today’s deep neural networks are loosely inspired by the way interconnected neurons fire in the brain. But loose inspiration is typically as far as it goes.

Most people in AI don’t care too much about the details, says Jeff Hawkins, a neuroscientist and tech entrepreneur. He wants to change that. Hawkins has straddled the two worlds of neuroscience and AI for nearly 40 years. In 1986, after a few years as a software engineer at Intel, he turned up at the University of California, Berkeley, to start a PhD in neuroscience, hoping to figure out how intelligence worked. But his ambition hit a wall when he was told there was nobody there to help him with such a big-picture project. Frustrated, he swapped Berkeley for Silicon Valley and in 1992 founded Palm Computing, which developed the PalmPilot—a precursor to today’s smartphones.

But his fascination with brains never went away. In 2002 he returned to neuroscience and set up the Redwood Center for Theoretical Neuroscience (now at Berkeley). Today he runs Numenta, a neuroscience research company based in Silicon Valley. There he and his team study the neocortex, the part of the brain responsible for everything we associate with intelligence. After a string of breakthroughs in the last few years, Numenta shifted its focus from brains to AI, applying what it had learned about biological intelligence to machines.

Hawkins’s ideas have inspired big names in AI, including Andrew Ng, and drawn accolades from the likes of Richard Dawkins, who wrote an enthusiastic foreword to Hawkins’s new book A Thousand Brains: A New Theory of Intelligence, published March 2.

I had a long chat with Hawkins on Zoom about what his research into human brains means for machine intelligence. He’s not the first Silicon Valley entrepreneur to think he has all the answers—and not everyone is likely to agree with his conclusions. But his ideas could shake up AI.  

Our conversation has been edited for length and clarity.

Why do you think AI is heading in the wrong direction at the moment?

That’s a complicated question. Hey, I’m not a critic of today’s AI. I think it’s great; it’s useful. I just don’t think it’s intelligent.

My main interest is brains. I fell in love with brains decades ago. I’ve had this attitude for a long time that before making AI, we first have to figure out what intelligence actually is, and the best way to do that is to study brains.

Back in 1980, or something like that, I felt the approaches to AI were not going to lead to true intelligence. And I’ve felt the same through all the different phases of AI—it’s not a new thing for me.

I look at the progress that has been made recently with deep learning and it’s dramatic, it’s pretty impressive—but that doesn’t take away from the fact that it’s fundamentally lacking. I think I know what intelligence is; I think I know how brains do it. And AI is not doing what brains do.

Are you saying that to build an AI we somehow need to re-create a brain?

No, I don’t think we’re going to build direct copies of brains. I’m not into brain emulation at all. But we’re going to need to build machines that work along similar principles. The only examples we have of intelligent systems are biological systems. Why wouldn’t you study that?

It’s like I showed you a computer for the first time and you say, “That’s amazing! I’m going to build something like it.” But instead of looking at it, trying to figure out how it works, you just go away and start trying to make something from scratch.

So what is it brains do that’s crucial to intelligence that you think AI needs to do too?

There are four minimum attributes of intelligence, a kind of baseline. The first is learning by moving: we cannot sense everything around us at once. We have to move to build up a mental model of things, even if it’s only moving our eyes or hands. This is called embodiment.

Next, this sensory input gets taken up by tens of thousands of cortical columns, each with a partial picture of the world. They compete and combine via a sort of voting system to build up an overall viewpoint. That’s the thousand brains idea. In an AI system, this could involve a machine controlling different sensors—vision, touch, radar, and so on—to get a more complete model of the world. There will typically be many cortical columns for each sense, such as vision, though. [A toy sketch of this voting idea follows below.]

Then there’s continuous learning, where you learn new things without forgetting previous stuff. Today’s AI systems can’t do this. And finally, we structure knowledge using reference frames, which means that our knowledge of the world is relative to our point of view. If I slide my finger up the edge of my coffee cup, I can predict that I’ll feel its rim, because I know where my hand is in relation to the cup.
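Hawkins doesn’t give an algorithm in the interview, but the voting step he describes can be sketched in a few lines. The toy model below is purely illustrative and is not Numenta’s implementation: each “column” holds a belief about the object it is sensing, and the consensus is whichever belief gets the most votes.

```python
from collections import Counter

def columns_vote(column_beliefs):
    """Each column model votes for the object it thinks it is sensing;
    the consensus is the most common vote."""
    votes = Counter(column_beliefs)
    winner, count = votes.most_common(1)[0]
    return winner, count / len(column_beliefs)

# Toy example: five column models, each with only a partial view of an
# object, vote on its identity.
beliefs = ["coffee cup", "coffee cup", "bowl", "coffee cup", "glass"]
print(columns_vote(beliefs))  # -> ('coffee cup', 0.6)
```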

Your lab has recently shifted from neuroscience to AI. Does that correspond to your thousand brains theory coming together?

Pretty much. Up until two years ago, if you walked into our office, it was all neuroscience. Then we made the transition. We felt we’d learned enough about the brain to start applying it to AI.

What kinds of AI work are you doing?

One of the first things we looked at was sparsity. At any one time, only 2% of our neurons are firing; the activity is sparse. We’ve been applying this idea to deep-learning networks and we’re getting dramatic results, like 50-fold speedups on existing networks. Sparsity also gives you more robust networks and lower power consumption. Now we’re working on continuous learning.
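Hawkins doesn’t say here how Numenta imposes sparsity, but one common way to get this kind of sparse activity is a k-winners-take-all step that keeps only the top few percent of a layer’s activations and zeroes out the rest. A minimal NumPy sketch of that general idea (not Numenta’s actual code):

```python
import numpy as np

def k_winners_take_all(activations, sparsity=0.02):
    """Keep only the top k activations (here 2%) and zero out the rest."""
    k = max(1, int(sparsity * activations.size))
    winners = np.argpartition(activations, -k)[-k:]  # indices of k largest
    sparse = np.zeros_like(activations)
    sparse[winners] = activations[winners]
    return sparse

layer_output = np.random.randn(1000)            # dense activations
sparse_output = k_winners_take_all(layer_output)
print(np.count_nonzero(sparse_output))          # -> 20, i.e. 2% of 1000
```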

It’s interesting that you include movement as a baseline for intelligence. Does that mean an AI needs a body? Does it need to be a robot?

In the future I think the distinction between AI and robotics will disappear. But right now I prefer the word “embodiment,” because when you talk about robots it conjures up images of humanlike robots, which isn’t what I’m talking about. The key thing is that the AI will have to have sensors and be able to move them relative to itself and the things it’s modeling. But you could also have a virtual AI that moves in the internet.

This idea is quite different from a lot of popular conceptions of intelligence as a disembodied brain.

Movement is really interesting. The brain uses the same mechanisms whether I’m moving my finger over a coffee cup, moving my eyes, or even thinking about a conceptual problem. Your brain moves through reference frames to recall facts that it has stored in different locations.

The key thing is that any intelligent system, no matter what its physical form, learns a model of the world by sensing different parts of it, by moving in it. That’s bedrock; you can’t get away from that. Whether it looks like a humanoid robot, a snake robot, a car, an airplane, or, you know, just a computer sitting on your desk scooting around the internet—they’re all the same.

How do most AI researchers feel about these ideas?

The vast majority of AI researchers don’t really embrace the idea that the brain is important. I mean, yes, people figured out neural networks a while ago, and they’re kind of inspired by the brain. But most people aren’t trying to replicate the brain. It’s just whatever works, works. And today’s neural networks are working well enough.

And most people in AI have very little understanding of neuroscience. It’s not surprising, because it’s really hard. It’s not something you just sit down and spend a couple of days reading about. Neuroscience itself has been struggling to understand what the hell’s going on in the brain.

But one of the big goals of writing this book was to start a conversation about intelligence that we’re not having. I mean, my ideal dream is that every AI lab in the world reads this book and starts discussing these ideas. Do we accept them? Do we disagree? That hasn’t really been possible before. I mean, this brain research is less than five years old. I’m hoping it’ll be a real turning point.

How do you see these conversations changing AI research?

As a field, AI has lacked a definition of what intelligence is. You know, the Turing test is one of the worst things that ever happened, in my opinion. Even today, we still focus so much on benchmarks and clever tricks. I’m not trying to say it’s not useful. An AI that can detect cancer cells is great. But is that intelligence? No. In the book I use the example of robots on Mars building a habitat for humans. Try to imagine what kind of AI is required to do that. Is that possible? It’s totally possible. I think at the end of the century, we will have machines like that. The question is how do we get away from, like, “Here’s another trick” to the fundamentals needed to build the future.

What did Turing get wrong when he started the conversation about machine intelligence?

I just mean that if you go back and read his original work, he was basically trying to get people to stop arguing with him about whether you could build an intelligent machine. He was like, “Here’s some stuff to think about—stop bothering me.” But the problem is that it’s focused on a task. Can a machine do something a human can do? And that has been extended to all the goals we set for AI. So playing Go was a great achievement for AI. Really? [laughs] I mean, okay.

The problem with all performance-based metrics, and the Turing test is one of them, is that it just avoids the conversation or the big question about what an intelligent system is. If you can trick somebody, if you can solve a task with some sort of clever engineering, then you’ve achieved that benchmark, but you haven’t necessarily made any progress toward a deeper understanding of what it means to be intelligent.

Is the focus on humanlike achievement a problem too?

I think in the future, many intelligent machines will not do anything that humans do. Many will be very simple and small—you know, just like a mouse or a cat. So focusing on language and human experience and all this stuff to pass the Turing test is kind of irrelevant to building an intelligent machine. It’s relevant if you want to build a humanlike machine, but I don’t think we always want to do that.

You tell a story in the book about pitching handheld computers to a boss at Intel who couldn’t see what they were for. So what will these future AIs do?

I don’t know. No one knows. But I have no doubt that we will find a gazillion useful things for intelligent machines to do, just like we’ve done for phones and computers. No one anticipated in the 1940s or 50s what computers would do. It’ll be the same with AI. It’ll be good. Some bad, but mostly good.

But I prefer to think of this in the long term. Instead of asking “What’s the use of building intelligent machines?” I ask “What’s the purpose of life?” We live in a huge universe in which we are little dots of nothing. I’ve had this question mark in my head since I was a little kid. Why do we care about anything? Why are we doing all this? What should our goal be as a species?

I think it’s not about preserving the gene pool: it’s about preserving knowledge. And if you think about it that way, intelligent machines are essential for that. We’re not going to be around forever, but our machines could be.

I find it inspirational. I want a purpose to my life. I think AI—AI as I envision it, not today’s AI—is a way of essentially preserving ourselves for a time and a place we don’t yet know.

Rocket Lab could be SpaceX’s biggest rival

In the private space industry, it can seem that there’s SpaceX and then there’s everyone else. Only Blue Origin, backed by billionaire founder Jeff Bezos, seems able to command the same degree of attention. And Blue Origin hasn’t even gone beyond suborbital space yet.

Rocket Lab might soon have something to say about that duopoly. The company, founded in New Zealand and headquartered in Long Beach, California, is second only to SpaceX when it comes to launch frequency—the two are effectively the only American companies that regularly go to orbit. Its small flagship Electron rocket has flown 18 times in just under four years and delivered almost 100 satellites into space, with only two failed launches.

On March 1, the company made its ambitions even clearer when it unveiled plans for a new rocket called Neutron. At 40 meters tall and able to carry 20 times the weight that Electron can, Neutron is being touted by Rocket Lab as its entry into markets for large satellite and mega-constellation launches, as well as future robotics missions to the moon and Mars. Even more tantalizing, Rocket Lab says Neutron will be designed for human spaceflight as well. The company calls it a “direct alternative” to the SpaceX Falcon 9 rocket.

“Rocket Lab is one of the success stories among the small launch companies,” says Roger Handberg, a space policy expert at the University of Central Florida. “They are edging into the territory of the larger, more established launch companies now—especially SpaceX.”

That ambition was helped by another bit of news announced on March 1: Rocket Lab’s merger with Vector Acquisition Corporation. Joining forces with a special-purpose acquisition company, a type of shell company that enables another business to go public without a traditional IPO, will allow Rocket Lab to benefit from a massive influx of money that gives it a new valuation of $4.1 billion. Much of that money is going toward development and testing of Neutron, which the company wants to start flying in 2024.

It’s a bit of an about-face for Rocket Lab. CEO Peter Beck had previously been lukewarm about the idea of building a larger rocket that could launch bigger payloads and potentially offer launches for multiple customers at once.

But the satellite market has embraced ride-share missions into orbit, especially given the rise of satellite mega-constellations, which will probably make up most satellites launched over the next decade. Neutron is capable of taking 8,000 kilograms to low Earth orbit, which means it could potentially deliver dozens of payloads at once. As a lighthearted mea culpa, the introductory video for Neutron showed Beck eating his own hat.

Neutron puts Rocket Lab in closer competition with SpaceX in other ways as well. Both companies are investing in cheaper spaceflight through the use of lower-cost materials and reusable systems. Neutron’s first-stage booster will be designed to land vertically on an ocean platform, just like the first-stage booster of the SpaceX Falcon 9. 

The comparisons to SpaceX don’t stop there. Rocket Lab claims that Neutron will be designed so it can be certified to fly human missions into orbit and to the International Space Station—just as SpaceX currently does. Neutron’s design is comparable to that of the Russian Soyuz launch vehicle, which can take trios of astronauts to the ISS. And of course, both companies are interested in missions to destinations beyond orbit. Neutron will be able to send 2,000 kg payloads to the moon and 1,500 kg payloads to Mars and Venus.

There are still some differences. Unlike SpaceX with its Crew Dragon vehicle, Rocket Lab isn’t building its own crew capsules yet. If Neutron does start taking humans into orbit, it’s not yet clear what crew vehicle would launch on top of it. Rocket Lab isn’t building an interplanetary spaceship like Starship. And it’s not trying to create a global satellite internet service like Starlink. Rocket Lab’s only big project outside of rockets is the Photon satellite bus (the core spacecraft infrastructure that handles functions such as power, communications, and telling ground control where the satellite is in orbit).

And Neutron won’t simply work out of the box. Making it reusable will require test after test (not even Electron is fully reusable yet). Neutron’s engine design is too big and complex to simply be adapted from Electron, so the company must start from scratch and figure out how it’s going to scale production all over again.

Unsurprisingly, human spaceflight will be the company’s big challenge. “Their new vehicle will make them more competitive for payloads,” says Handberg. “Human spaceflight is more problematic.” The ISS is one destination, but Rocket Lab will be competing with SpaceX and Boeing for such contracts. Maybe it can find business through space tourism, but that industry is still in its infancy.

And then there’s safety. “Rocket Lab has a competition problem, but that will be secondary to the cost of making the new vehicle human rated and building an infrastructure to support operations,” says Handberg. “SpaceX is one flight mishap away from problems with the Crew Dragon vehicle.” The failed Starliner test flight in December 2019 pushed back Boeing’s hopes of sending people into space by over a year. With Neutron, though, the gap between SpaceX and Rocket Lab’s impact on the commercial space industry has suddenly narrowed. If and when Neutron is ready to fly in 2024, the two companies will be closer still.

Should the US start prioritizing first vaccine doses to beat the variants?

The vaccine rollout in the United States has been sluggish, hampered by manufacturing delays, logistical challenges, and freak snowstorms. Demand far outstrips supply. 

Meanwhile, the more transmissible variant circulating widely in the UK is gaining a foothold in the US. Modeling by the Centers for Disease Control and Prevention suggests it will quickly become the dominant strain, bringing a surge in cases, hospitalizations, and deaths. “That hurricane is coming,” said Michael Osterholm, director of the Center for Infectious Disease Research and Policy (CIDRAP) at the University of Minnesota, in a January 31 appearance on the weekly US TV news show Meet the Press.

To save lives, the federal government needs to quickly pivot to a new vaccination strategy, say a growing number of public health experts. 

The two vaccines that have been administered in the US, from Pfizer and Moderna, require two doses spaced three or four weeks apart. But studies suggest that even the first dose provides good protection against illness. (A third, one-shot vaccine from Johnson & Johnson received FDA authorization on February 27, and the first doses should be dispensed this week.) Rather than sticking to the dosing schedule, some experts argue that health officials should prioritize getting one shot into as many high-risk individuals as possible. The booster shot could wait until the surge passes and more doses are available.

In a report released on February 23, Osterholm and his colleagues calculate that temporarily prioritizing first doses for those over the age of 65 might save as many as 39,000 lives. “There is a narrow and rapidly closing window of opportunity to more effectively use vaccines and potentially prevent thousands of severe cases, hospitalizations, and deaths in the next weeks and months,” the authors write. 

The UK adopted a similar strategy in December, and Quebec announced in January that it would stop holding back booster shots and try to vaccinate as many people as possible, delaying the second shot for up to 90 days. 

But many public health experts, including senior advisors in the Biden administration, argue that there isn’t enough data to support a switch to a one-dose strategy. They worry that deferring the second dose will leave people vulnerable to infection, and potentially give rise to new variants that can evade the immune response. And there are logistics to consider. Switching strategies now would complicate the rollout, says Céline Gounder, an epidemiologist at the NYU Grossman School of Medicine and a member of the Biden administration’s covid-19 Advisory Board. “You’re really having to break the current system, which is already very fragile,” she says. It also might undermine the public’s already tenuous trust in the vaccines.

“Given the information we have right now, we will stick with the scientifically documented efficacy and optimal response of a prime followed by a boost,” said Anthony Fauci, President Biden’s chief medical advisor, in a press briefing on February 19. Andy Slavitt, White House senior advisor on the covid-19 response, agreed. “The recommendation from the FDA is two doses, just as it always has been,” he said.

The big protection question

The debate hinges on how much protection one dose really offers and how long that protection lasts. 

In the large clinical trials, Moderna and Pfizer saw good efficacy even before the second shot. The first dose of the Pfizer vaccine provided 52% protection against symptomatic covid-19, and the Moderna shot achieved efficacy of 80%. But those figures included the days immediately after vaccination, when the immune system is still ramping up its response. When researchers looked at efficacy two weeks from the date of the shot, they found much higher numbers. One analysis suggests the Pfizer vaccine reached nearly 92% efficacy before the second shot. The first dose of the Moderna shot was 92% efficacious after two weeks.
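Here “efficacy” is the relative reduction in risk: one minus the ratio of the attack rate among vaccinated participants to the attack rate among placebo recipients. A minimal sketch of the arithmetic, using invented case counts rather than the actual trial data:

```python
def vaccine_efficacy(cases_vax, n_vax, cases_placebo, n_placebo):
    """Efficacy = 1 - (attack rate, vaccinated) / (attack rate, placebo)."""
    return 1 - (cases_vax / n_vax) / (cases_placebo / n_placebo)

# Invented illustration: 8 cases among 20,000 vaccinated people versus
# 100 cases among 20,000 placebo recipients.
print(f"{vaccine_efficacy(8, 20_000, 100, 20_000):.0%}")  # -> 92%
```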

And new research hints that one dose might offer some protection in a real-world setting too. In a new study in the New England Journal of Medicine, researchers examined medical records from nearly 600,000 vaccinated individuals in Israel and the same number of controls. The first dose of the Pfizer vaccine was 46% effective against SARS-CoV-2 infection between days 14 and 20. The shot did an even better job of preventing hospitalization and death: protection was 74% and 72%, respectively.

Another study out of Israel, which has yet to be peer reviewed and published, looked at more than 350,000 people who received one dose of the Pfizer vaccine. The researchers compared the number of SARS-CoV-2 infections in the first 12 days after the shot—before the immune system had learned to recognize the virus spike protein—with the number of infections from days 13 to 24. They found that the first dose led to a 51% reduction in confirmed SARS-CoV-2 infections, with or without symptoms.

Findings from the UK also seem to bolster support for a delayed second dose strategy. One preprint study included roughly 19,000 healthcare workers in England who received the Pfizer vaccine. A single dose of the vaccine reduced the risk of infection by 72% after three weeks. 

But Gounder points out that it’s not clear how long protection lasts. “This is somewhat of a data-free zone.” 

One fear is that protection will wane quickly without a booster shot, and researchers did see some evidence to support that in Scotland. In a preprint paper, the team reported that one dose of the Pfizer vaccine showed 85% protection against hospitalization about a month after immunization. But then that protection began to decline, falling from 85% at its peak to 58% in people who were six weeks or more past the date of their first shot.   

Rise of the variants

Waning protection is not the only cause for concern. Some experts fear that having a plethora of partially immunized people could fuel the rise of new, immune-evading variants. People who receive one shot have lower levels of antibodies, which might leave them vulnerable to infection. If the virus replicates and mutates in the presence of a partial immune response, mutants with the ability to evade the immune system might be at an advantage. Fauci brought this idea up in a recent press conference. 

Andrew Read, a disease ecologist at the Pennsylvania State University Center for Infectious Disease Dynamics, has been studying vaccine resistance for two decades. He acknowledges that administering single shots could give rise to new variants, but he argues that this theoretical concern is less important than getting as many people as possible protected right now.

“The history of vaccination shows that, even when these variants arise, they never make vaccines useless,” he says. The shots might be less effective, but they won’t fail completely. And if new variants did arise, vaccine manufacturers could retool the vaccines to address the problem. Moderna says it is already creating a new vaccine designed to target the variant first reported in South Africa, for example.

Sarah Cobey, an evolutionary biologist at the University of Chicago, doesn’t think the variant fears should halt a move toward getting as many doses administered as possible. For most people, covid-19 is a “flash in the pan” infection, not one that lingers. That doesn’t leave much time for selective pressure. “I don’t think that we’re going to see repeated emergence of escape variants,” she says. When these variants do arise, partially immunized people will be more susceptible than people who have been fully vaccinated. But as long as the vaccine still provides some protection, the rate of spread and prevalence should drop.

Indeed, Cobey and her colleagues have drafted a white paper, which will be posted soon, arguing that prioritizing first doses might actually curb the rise of new variants. “When you have larger viral populations and higher growth rates of viral populations, you have faster evolutionary change,” says Marc Lipsitch, an epidemiologist at the Harvard T.H. Chan School of Public Health and co-author on the paper. “The best way to reduce variant spread and adaptive evolution by the virus in general is to chop down its population as much as possible.” 
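The intuition behind Lipsitch’s point can be seen with a deliberately simplified toy calculation: if escape mutations arise at some small fixed rate per infection, the expected number of escape events scales with total infections, so shrinking the epidemic also shrinks the virus’s evolutionary opportunities. The rate used below is invented purely for illustration:

```python
# Toy model: expected escape-variant events scale with total infections.
# The per-infection escape rate below is invented purely for illustration.
ESCAPE_RATE_PER_INFECTION = 1e-7

def expected_escape_events(total_infections):
    return total_infections * ESCAPE_RATE_PER_INFECTION

# Halving infections (say, by protecting more people with first doses)
# halves the expected number of escape events in this simple model.
print(expected_escape_events(20_000_000))  # -> 2.0
print(expected_escape_events(10_000_000))  # -> 1.0
```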

Path forward

Even if not all experts agree that prioritizing first doses is the best approach, there may be some middle ground. Mounting evidence suggests that people who have previously had covid-19 have a robust response to the first dose and may not need a second shot. “We believe that covid survivors only need a single dose to achieve the same level of antibody titers and neutralization,” says Viviana Simon, a microbiologist at the Icahn School of Medicine at Mount Sinai. Limiting people who have recovered from covid to one dose could free up millions of doses. Splitting doses could also help address the shortage. A new study suggests that two half doses of the Moderna vaccine elicit roughly the same antibody levels as the full dose. 

In an ideal world, the vaccines would be plentiful and everyone would receive two full doses on schedule. But given the current situation, Read says, the path seems clear. “We’re trying to make the best of a bad job here. There’s not enough vaccines to go around,” he says. Given that we can’t achieve perfection, “we need to get as many people as possible as immune as possible as fast as possible.” 

Recovering from the SolarWinds hack could take 18 months

Fully recovering from the SolarWinds hack will take the US government anywhere from a year to 18 months, according to the head of the agency leading Washington’s recovery.

The hacking campaign against American government agencies and major companies was first discovered in December 2020. At least nine federal agencies were targeted, including the Department of Homeland Security and the State Department. The attackers, who US officials believe to be Russian, exploited a product made by the US software firm SolarWinds to hack government and corporate targets.

Brandon Wales, the acting director of CISA, the US Cybersecurity and Infrastructure Security Agency, says that it will be well into 2022 before officials have fully secured the compromised government networks. Even fully understanding the extent of the damage will take months.

“I wouldn’t call this simple,” Wales says. “There are two phases for response to this incident. There is the short-term remediation effort, where we look to remove the adversary from the network, shutting down accounts they control, and shutting down entry points the adversary used to access networks. But given the amount of time they were inside these networks—months—strategic recovery will take time.”

When the hackers have succeeded so thoroughly and for so long, the answer sometimes can be a complete rebuild from scratch. The hackers made a point of undermining trust in targeted networks, stealing identities, and gaining the ability to impersonate or create seemingly legitimate users in order to freely access victims’ Microsoft 365 and Azure accounts. By taking control of trust and identity, the hackers become that much harder to track.

“Most of the agencies going through that level of rebuilding will take in the neighborhood of 12 to 18 months to make sure they’re putting in the appropriate protections,” Wales says. 

American intelligence agencies say Russian hackers first infiltrated SolarWinds in 2019. Subsequent investigation has shown that the hackers started using the company’s products to distribute malware by March 2020, and their first successful breach of the US federal government came early in the summer. That’s a long time to go unnoticed—longer than many organizations keep the kind of expensive forensic logs needed for the level of investigation required to sniff the hackers out.

SolarWinds Orion, the network management product that was targeted, is used by tens of thousands of corporations and government agencies. Over 17,000 organizations downloaded the infected back door. The hackers were extraordinarily stealthy and specific in their targeting, which is why it took so long to catch them—and why it’s taking so long to understand their full impact.

The difficulty of uncovering the extent of the damage was summarized by Brad Smith, the president of Microsoft, in a congressional hearing last week. 

“Who knows the entirety of what happened here?” he said. “Right now, the attacker is the only one who knows the entirety of what they did.”

Kevin Mandia, CEO of the security company FireEye, which raised the first alerts about the attack, told Congress that the hackers prioritized stealth above all else.

“Disruption would have been easier than what they did,” he said. “They had focused, disciplined data theft. It’s easier to just delete everything in blunt-force trauma and see what happens. They actually did more work than what it would have taken to go destructive.”

“This has a silver lining”

CISA first heard about a problem when FireEye discovered that it had been hacked and notified the agency. The company regularly works closely with the US government, and although it wasn’t legally obligated to tell anyone about the hack, it quickly shared news of the compromise of its sensitive corporate network.

It was Microsoft that told the US government federal networks had been compromised. The company shared that information with Wales on December 11, he said in an interview. Microsoft observed the hackers breaking into the Microsoft 365 cloud that is used by many government agencies. A day later, FireEye informed CISA of the back door in SolarWinds, a little-known but extremely widespread and powerful tool. 

This signaled that the scale of the hack could be enormous. CISA’s investigators ended up working straight through the holidays to help agencies hunt for the hackers in their networks.

These efforts were made even more complicated because Wales had only just taken over at the agency: days earlier, former director Chris Krebs had been fired by Donald Trump for repeatedly debunking White House disinformation about a stolen election. 

While headlines about the firing of Krebs focused on the immediate impact on election security, Wales had a lot more on his hands. 

The new man in charge at CISA is now faced with what he describes as “the most complex and challenging” hacking incident the agency has come up against.

The hack will almost certainly accelerate the already apparent rise of CISA by increasing its funding, authority, and support. 

CISA was recently given the legal authority to persistently hunt for cyber threats across the federal government, but Wales says the agency lacks the resources and personnel to carry out that mission. He argues that CISA also needs to be able to deploy and manage endpoint detection systems on computers throughout the federal government in order to detect malicious behavior. Finally, pointing to the fact that the hackers moved freely throughout the Microsoft 365 cloud, Wales says CISA needs to push for more visibility into the cloud environment in order to detect cyber espionage in the future.

In the last year, supporters of CISA have been pushing for it to become the nation’s lead cybersecurity agency. An unprecedented cybersecurity disaster could prove to be the catalyst it needs.

“This has a silver lining,” said Mark Montgomery, who served as executive director of the Cyberspace Solarium Commission, in a phone call. “This is among the most significant malicious cyber acts ever conducted against the US government. The story will continue to get worse for several months as more understanding of what happened is revealed. That will help focus the incoming administration on this issue. They have a lot of priorities, so it would be easy for cyber to get lost in the clutter. That’s not going to happen now.”

Israel’s “green pass” is an early vision of how we leave lockdown

The commercial opens with a tempting vision and soaring instrumentals. A door swings wide to reveal a sunlit patio and a relaxed, smiling couple awaiting a meal. “How much have we missed going out with friends?” a voiceover asks. “With the green pass, doors simply open in front of you … We’re returning to life.” It’s an ad to promote Israel’s version of a vaccine passport, but it’s also catnip for anyone who’s been through a year in varying degrees of lockdown. Can we go back to normal life once we’ve been vaccinated? And if we can, what kind of proof should we need?

Although there are still many unknowns about vaccines, and many practical issues surrounding implementation, those considering vaccine passport programs include airlines, music venues, Japan, the UK, and the European Union.

Some proponents, including those on one side of a fierce debate in Thailand, have focused on ending quarantines for international travelers to stimulate the hard-hit tourism industry. Others imagine following Israel’s lead, creating a two-tiered system that allows vaccinated people to enjoy the benefits of a post-pandemic life while others wait for their shots. What is happening there gives us a glimpse of the promise—and of the difficulties such schemes face.

How it works

Israel’s vaccine passport was released on February 21 to help the country emerge from a month-long lockdown. Vaccinated people can download an app that displays their “green pass” when they are asked to show it. The app can also display proof that someone has recovered from covid-19. (Many proposed passport systems offer multiple ways to show you are not a danger, such as proof of a recent negative test. The Israeli government says that option will come to the app soon, which will be especially useful for children too young to receive an approved vaccine.) Officials hope the benefits of the green pass will encourage vaccination among Israelis who have been hesitant, many of whom are young.

“People who get vaccinated need to know that something has changed for them, that they can ease up,” says Nadav Eyal, a prominent television journalist. “People want to know that they can have some normalcy back.”

Despite the flashy ads, however, it’s still too early to tell how well Israel’s program will work in practice—or what that will mean for vaccine passports in general. Some ethicists argue that such programs may further entrench existing inequalities, and this is already happening with Israel’s pass, since few Palestinians in the occupied territories of Gaza and the West Bank have access to vaccines.

The green pass is also a potential privacy nightmare, says Orr Dunkelman, a computer science professor at Haifa University and a board member of Privacy Israel. He says the pass reveals information that those checking credentials don’t need to know, such as the date a user recovered from covid or got a vaccine. The app also uses an outdated encryption library that is more vulnerable to security breaches, Dunkelman says. Crucially, because the app is not open source, no third-party experts can vet whether these concerns are founded.

“This is a catastrophe in the making,” says Ran Bar Zik, a software columnist for the newspaper Haaretz. 

Bar Zik recommends another option currently available under the green pass program: downloading a paper vaccination certificate instead of using the app. Although that’s possible, the app is expected to become the most widespread verification method.

Unnecessarily complicated

In the US, developers are trying to address such privacy concerns ahead of any major rollout. Ramesh Raskar runs the PathCheck Foundation at MIT, which has partnered with the design consultancy Ideo on a low-tech solution. Their prototype uses a paper card, similar to the one people currently receive when they’re vaccinated. 

The paper card could offer multiple forms of verification, scannable in the form of QR codes, allowing you to show a concert gatekeeper only your vaccination status while displaying another, more information-heavy option to health-care providers. 
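The paragraph above describes a goal rather than a published specification, so the sketch below is hypothetical: a single credential record backs two QR payloads, and the cardholder chooses how much to disclose. All field names are invented.

```python
import json

# Hypothetical credential record; the field names are invented.
full_record = {
    "name": "Jane Doe",
    "date_of_birth": "1980-01-01",
    "vaccine": "example-vaccine",
    "dose_dates": ["2021-02-01", "2021-02-22"],
    "status": "fully_vaccinated",
}

# Minimal payload for a venue gatekeeper: status only, no identity details.
minimal_payload = json.dumps({"status": full_record["status"]})

# Information-heavy payload for a hospital or an airline.
detailed_payload = json.dumps(full_record)

# In a real system each payload would be signed by the issuer and encoded
# as its own QR code; the cardholder chooses which code to show.
print(minimal_payload)  # -> {"status": "fully_vaccinated"}
```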

“Getting on a bus, or getting into a concert, you need to have a solution that is very easy to use and that provides a level of privacy protection,” he says. But other situations may require more information: an airline wants to know that you are who you say you are, for example, and hospitals need accurate medical records. 

It’s not just about making sure you don’t have to hand over personal information to get into a bar, though: privacy is also important for those who are undocumented or who mistrust the government, Raskar says. It’s important for companies not to create another “hackable repository” when they view your information, he adds. 

He suggests that right now commercial interests are getting in the way of creating something so simple—it wouldn’t make much money for software companies, which at least want to show off something that could be repurposed later in a more profitable form. Compared with Israel, he says, “we’re making things unnecessarily complicated in the US.” 

The way forward

It’s unclear what the US—which, unlike Israel, doesn’t have a universal identity record or a cohesive medical records system—would need to do to implement a vaccine passport quickly. 

But whichever options eventually do make it into widespread use, there are also aspects of this idea that don’t get laid out in the ads. For example, proposals have been floated that would require teachers and medical staff to provide proof of vaccination or a negative test to gain admittance to their workplaces. 

That could be overly intrusive on individual privacy rights, says Amir Fuchs, a researcher at the Israel Democracy Institute. Still, he says, “most people understand that there is a logic in that people who are vaccinated will have less limitations.”

Despite the progress in delivering vaccines, these passport efforts are all still in the early stages. PathCheck’s idea hasn’t rolled out yet, although pilots are under discussion. In Denmark, vaccine passports are still more a promise than a plan. And even in Israel, the vision put forward by government advertising is still just an ambition: while pools and concert venues may be open to green pass holders, dining rooms and restaurants aren’t open yet—for anybody.

This story is part of the Pandemic Technology Project, supported by the Rockefeller Foundation.

Hackers are finding ways to hide inside Apple’s walled garden

You’ve heard of Apple’s famous walled garden, the tightly controlled tech ecosystem that gives the company unique control of features and security. All apps go through a strict Apple approval process, they are confined to a sandbox so they can’t gather sensitive information on the phone, and developers are locked out of places they’d be able to get into in other systems. The barriers are so high now that it’s probably more accurate to think of it as a castle wall.

Virtually every expert agrees that the locked-down nature of iOS has solved some fundamental security problems, and that with these restrictions in place, the iPhone succeeds spectacularly in keeping almost all the usual bad guys out. But when the most advanced hackers do succeed in breaking in, something strange happens: Apple’s extraordinary defenses end up protecting the attackers themselves.

“It’s a double-edged sword,” says Bill Marczak, a senior researcher at the cybersecurity watchdog Citizen Lab. “You’re going to keep out a lot of the riffraff by making it harder to break iPhones. But the 1% of top hackers are going to find a way in and, once they’re inside, the impenetrable fortress of the iPhone protects them.”

Marczak has spent the last eight years hunting those top-tier hackers. His research includes the groundbreaking 2016 “Million Dollar Dissident” report that introduced the world to the Israeli hacking company NSO Group. And in December, he was the lead author of a report titled “The Great iPwn,” detailing how the same hackers allegedly targeted dozens of Al Jazeera journalists.

He argues that while the iPhone’s security is getting tighter as Apple invests millions to raise the wall, the best hackers have their own millions to buy or develop zero-click exploits that let them take over iPhones invisibly. These allow attackers to burrow into the restricted parts of the phone without ever giving the target any indication of having been compromised. And once they’re that deep inside, the security becomes a barrier that keeps investigators from spotting or understanding nefarious behavior—to the point where Marczak suspects they’re missing all but a small fraction of attacks because they cannot see behind the curtain.

This means that even to know you’re under attack, you may have to rely on luck or vague suspicion rather than clear evidence. The Al Jazeera journalist Tamer Almisshal contacted Citizen Lab after he received death threats about his work in January 2020, but Marczak’s team initially found no direct evidence of hacking on his iPhone. They persevered by looking indirectly at the phone’s internet traffic to see who it was whispering to, until finally, in July last year, researchers saw the phone pinging servers belonging to NSO. It was strong evidence pointing toward a hack using the Israeli company’s software, but it didn’t expose the hack itself.

Sometimes the locked-down system can backfire even more directly. When Apple released a new version of iOS last summer in the middle of Marczak’s investigation, the phone’s new security features killed an unauthorized “jailbreak” tool Citizen Lab used to open up the iPhone. The update locked him out of the private areas of the phone, including a folder for new updates—which turned out to be exactly where hackers were hiding.

Faced with these blocks, “we just kind of threw our hands up,” says Marczak. “We can’t get anything from this—there’s just no way.” 

Beyond the phone

Ryan Stortz is a security engineer at the firm Trail of Bits. He leads development of iVerify, a rare Apple-approved security app that does its best to peer inside iPhones while still playing by the rules set in Cupertino. iVerify looks for security anomalies on the iPhone, such as unexplained file modifications—the sort of indirect clues that can point to a deeper problem. Installing the app is a little like setting up trip wires in the castle that is the iPhone: if something doesn’t look the way you expect it to, you know a problem exists.

But like the systems used by Marczak and others, the app can’t directly observe unknown malware that breaks the rules, and it is blocked from reading through the iPhone’s memory in the same way that security apps on other devices do. The trip wire is useful, but it isn’t the same as a guard who can walk through every room to look for invaders.
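iVerify’s internals aren’t public, and sandboxed apps can’t scan arbitrary iOS system files in any case, so the following is only a generic illustration of the trip-wire idea: record hashes of files that shouldn’t change, then flag any that do.

```python
import hashlib
from pathlib import Path

def snapshot(paths):
    """Record SHA-256 hashes of files that should not change."""
    return {p: hashlib.sha256(Path(p).read_bytes()).hexdigest() for p in paths}

def tripped_wires(baseline):
    """Return the files whose contents no longer match the baseline."""
    return [p for p, digest in baseline.items()
            if hashlib.sha256(Path(p).read_bytes()).hexdigest() != digest]

# Hypothetical usage: take a baseline at setup time, re-check later.
baseline = snapshot(["settings.plist"])  # invented file name
for path in tripped_wires(baseline):
    print(f"Unexplained modification: {path}")
```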

Despite these difficulties, Stortz says, modern computers are converging on the lockdown philosophy—and he thinks the trade-off is worth it. “As we lock these things down, you reduce the damage of malware and spying,” he says.

This approach is spreading far beyond the iPhone. In a recent briefing with journalists, an Apple spokesperson described how the company’s Mac computers are increasingly adopting the iPhone’s security philosophy: its newest laptops and desktops run on custom-built M1 chips that make them more powerful and secure, in part by increasingly locking down the computer in the same ways as mobile devices.

“iOS is incredibly secure. Apple saw the benefits and has been moving them over to the Mac for a long time, and the M1 chip is a huge step in that direction,” says security researcher Patrick Wardle.

Macs were moving in this direction for years before the new hardware, Wardle adds. For example, Apple doesn’t allow Mac security tools to analyze the memory of other processes—preventing apps from checking any room in the castle aside from their own.

These rules are meant to safeguard privacy and prevent malware from accessing memory to inject malicious code or steal passwords. But some hackers have responded by creating memory-only payloads—code that exists in a place where Apple doesn’t allow outside security tools to pry. It’s a game of hide and seek for those with the greatest skill and most resources.

“Security tools are completely blind, and adversaries know this,” Wardle says.

It’s not just Apple, says Aaron Cockerill, chief strategy officer at the mobile security firm Lookout: “Android is increasingly locked down. We expect both Macs and ultimately Windows will increasingly look like the opaque iPhone model.”

“We endorse that from a security perspective,” he says, “but it comes with challenges of opacity.”

In fact, Google’s Chromebook—which limits the ability to do anything outside the web browser—might be the most locked-down device on the market today. Microsoft, meanwhile, is experimenting with Windows S, a locked-down flavor of its operating system that is built for speed, performance, and security. 

These companies are stepping back from open systems because the approach works, and security experts know it. Bob Lord, the chief security officer for the Democratic National Committee, famously recommends that everyone who works for him—and most other people, too—use only an iPad or a Chromebook for work, specifically because those devices are so locked down. Most people don’t need vast access and freedom on their machines, so closing them off does nothing to harm ordinary users and everything to shut out hackers.

But it does hurt researchers, investigators, and those who are working on defense. So is there a solution?

Making the trade-offs

In theory, Apple could choose to grant certain entitlements to known defenders with explicit permission from users, allowing a little more freedom to investigate. But that opens doors that can be exploited. And there is another consequence to consider: every government on earth wants Apple’s help to open up iPhones. If the company created special access, it’s easy to imagine the FBI knocking, a precarious position Apple has spent years trying to avoid.

“I would hope for a framework where either the owner of a device or someone they authorize can have greater forensic abilities to see if a device is compromised,” Marczak says. “But of course that’s tough, because when you enable users to consent to things, they can be maliciously socially engineered. It’s a hard problem. Maybe there are engineering answers to reduce social engineering but still allow researchers access to investigate device compromise.”

Apple and independent security experts are in agreement here: there is no neat fix. Apple strongly believes it is making the correct trade-offs, a spokesperson said recently in a phone interview. Cupertino argues that no one has convincingly demonstrated that loosening security enforcement or making exceptions will ultimately serve the greater good.

Consider how Apple responded to Marczak’s latest report. Citizen Lab found that hackers were targeting iMessage, but no one ever got their hands on the exploit itself. Apple’s answer was to completely re-architect iMessage with the app’s biggest security update ever. They built the walls higher and stronger around iMessage so that exploiting it would be an even greater challenge.

“I personally believe the world is marching toward this,” Stortz says. “We are going to a place where only outliers will have computers—people who need them, like developers. The general population will have mobile devices which are already in the walled-garden paradigm. That will expand. You’ll be an outlier if you’re not in the walled garden.”

The one-shot vaccine from Johnson & Johnson now has FDA support in the US

An advisory board to the US Food and Drug Administration voted unanimously in favor of the first single-shot covid-19 vaccine, clearing the path for the health agency to authorize its immediate use as soon as tomorrow.

The one-shot vaccine, developed by Johnson & Johnson, has the additional advantage of being easy to store, because it requires nothing colder than ordinary refrigerator temperatures. It stopped 66% of mild and serious covid-19 cases in a trial carried out on three continents.

It will join a US covid arsenal that already includes authorized vaccines from Moderna and Pfizer. Those vaccines, which use messenger RNA, were significantly more effective (they stopped about 95% of cases), but they require two shots, and the doses need to be stored at ultra-cold temperatures.

Globally, injections developed in Russia, China, India, and the United Kingdom are also starting to see wide use.

While the new J&J vaccine isn’t as effective as those made using messenger RNA technology, health officials said that shouldn’t dissuade people from getting it, since it still sharply reduces the chance of illness and death.

“To have two is fine, and having three is absolutely better,” Anthony Fauci, the president’s chief medical advisor, said during an interview on NBC. “It’s more choices and increases the supply. It will certainly contribute to getting control.”

In the US, there have been approximately 28 million confirmed cases of covid-19 and 500,000 deaths.

The limited supplies of the Moderna and Pfizer shots mean most Americans are still waiting to be vaccinated. About 1.4 million doses of those two vaccines were given each day last week in the US. At that pace it would take about a year to vaccinate the whole nation.
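The arithmetic behind that estimate, in round numbers (the adult-population figure is an approximation, and children weren’t yet eligible for the shots):

```python
# Back-of-the-envelope pace check, using round numbers.
adults = 260_000_000        # approximate US adult population
doses_per_person = 2        # Pfizer and Moderna each require two shots
doses_per_day = 1_400_000   # the pace reported for the prior week

days = adults * doses_per_person / doses_per_day
print(f"{days:.0f} days, roughly {days / 365:.1f} years")  # -> 371 days
```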

In theory, an easily stored single-shot vaccine could kick up the pace. In practice, though, supply shortages of the J&J vaccine could limit the role it plays in the US vaccination campaign. In testimony before Congress this week, Johnson & Johnson said it had only 4 million shots ready to go, a third of the initial supply promised, and would deliver only 20 million doses by the end of March.

“I wonder if the J&J vaccine is going to be a significant part of the US landscape,” says Eric Topol, a doctor at the Scripps Research Institute, who called initial supplies “paltry” given that the company received extensive government support.

The vaccine also has what Topol called a “notable dropdown in efficacy overall” compared with messenger RNA shots, although many health experts this week rushed to defend the vaccine against any suggestion it was inferior.

“Everything we’ve seen so far says these are excellent vaccines,” Ashish Jha, a health policy researcher and doctor at Brown University, wrote on Twitter, where he argued that comparing “headline efficacy” among vaccines can be misleading since “they all are essentially 100% at preventing hospitalizations [and] deaths once they’ve kicked in.”

New shot

The new one-shot vaccine, called Ad26.COV2.S, was developed by Johnson & Johnson using work from Beth Israel Deaconess Medical Center in Boston. It employs a harmless viral carrier, adenovirus 26, which can enter cells but doesn’t multiply or grow. Instead, the carrier is used to drop off gene instructions that tell a person’s cells to make the distinctive coronavirus spike protein, which in turn trains the immune system to combat the pathogen.

The New York Times published a detailed graphical explanation of how the vaccine works.

Richard Nettles, vice president of US medical affairs at Janssen, a J&J subsidiary, told Congress during testimony on February 23 that production of the vaccine is “highly complex” and said the company was working to manufacture the shots at eight locations, including a US site in Maryland.

The manufacturing is complicated because the vaccine virus is grown in living cells before it is purified and bottled. Making a batch of virus takes two months, which is why there is no way to immediately increase supplies if timelines are missed.

Indeed, the biggest disappointment around the new vaccine is a supply shortfall caused by manufacturing problems. Jeffrey Zients, coordinator of President Biden’s covid-19 task force, said during a White House press conference on Wednesday, February 24, that the new administration had only “learned that J&J was behind on manufacturing” when it took office five weeks ago.

“It was disappointing when we arrived,” he said. “The initial production ramp … was slower than we’d like.”

Pretty effective

In late January, the company announced results from a 45,000-person study it carried out in the US, South Africa, and South America, in which people got either the vaccine or a placebo.

Overall, the vaccine was 66% effective in stopping covid-19, and somewhat better at stopping severe disease. In the trial, for instance, seven people died of covid-19, but all of these were in the placebo arm. Also, its effects increased with time—after a month, no one in the vaccine arm had to go to the hospital for covid-19.

Johnson & Johnson claims it will not be making a profit from the vaccine, which will also be sold outside the US. Instead, Nettles said, the vaccine will be sold at a single “not-for-profit” price to all countries “for emergency pandemic use.”

Nettles didn’t say what that price would be, but the US agreed last year to pay the company about $1 billion for a guarantee of 100 million doses and has given the company a similar amount of development funding, making it one of the major investments of Operation Warp Speed, as the vaccine effort was known during the Trump administration.

Shortage to surplus

At least for the moment, vaccine supply remains a limiting factor in the US inoculation campaign, which has seen 70 million doses administered since it began in December, according to Bloomberg. “I don’t see an excess of vaccine for a while,” says Peter Hotez, a virologist and vaccine developer at the Baylor College of Medicine.

All told, the US will have received enough shots to fully vaccinate 130 million Americans by the end of March, when projected supplies from Pfizer, Moderna, and J&J are tallied together.

Still, vaccine shortages could turn to excess before summer, creating a situation in which it’s no longer vaccines that are in short supply, but people willing or eligible to receive them.

That is because children under 18, who make up about a quarter of the US population, aren't yet allowed to receive the shots. In addition, about 30% of American adults say they won't get a covid-19 vaccine at all. Together, children and vaccine doubters account for roughly half the population.
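The "roughly half" follows from simple arithmetic on the figures above:

```python
# Back-of-the-envelope arithmetic using the proportions cited above.
children_share = 0.25                 # under-18s, not yet eligible for shots
adult_share = 1 - children_share      # the remaining 75% are adults
hesitant_adults = 0.30 * adult_share  # ~30% of adults say they'll refuse

print(f"{children_share + hesitant_adults:.1%}")  # -> 47.5%, roughly half
```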

By August, the three companies say, they will have delivered enough vaccine to the US to cover 400 million people, more than the country's entire population. That figure does not count a fourth vaccine, manufactured by Novavax, that may also win US authorization.

“By the summer we will be in good shape. The question is how we navigate this space between now and June,” says Hotez.

Growing arsenal

The Johnson & Johnson shot joins a growing worldwide list of approved vaccines that includes the two messenger RNA vaccines, injections from AstraZeneca and Chinese manufacturers, and Russia's Sputnik V vaccine, all of which are in use outside the US.

People who get any of the vaccines will, on average, see their chance of dying from covid-19 plummet to near zero. That is down from an overall death rate of around 1.7% of diagnosed cases in the US—and a risk several times higher in elderly people.

The J&J shot has fewer side effects than the mRNA vaccines and has also proved effective against a highly transmissible South African variant of the virus that has accumulated numerous mutations.

The South African variant has alarmed researchers because it clearly decreases the effectiveness of some vaccines. A study in South Africa by AstraZeneca found its vaccine didn't offer protection against the variant at all, causing officials to scrap a plan to distribute the shot there.

According to health minister Zweli Mkhize, South Africa is instead pivoting to the J&J vaccine, with a plan to vaccinate 80,000 health-care workers in the next two weeks.

This week, Moderna also said it would develop a shot tailored against the South African variant, and Pfizer indicated it was preparing to counter new strains as they arise. Another strategy being contemplated to fend off variants is to give people extra booster doses of the current vaccines.

Some experts in the US continue to urge the government to adopt faster-paced vaccine schemes, like delaying second doses of the messenger RNA shots or using half doses, arguing that the more people who have “good enough” protection, the sooner the pandemic will end.

So far, though, it’s not clear what agency or official would be ready, or even legally authorized, to make that call.

“We are all scratching our heads about who could make that decision,” says Hotez. “And it all depends on how much urgency you feel. The big picture is if you know the numbers are going down, and feel they are going to stay down due to seasonality, then you have some breathing space. But if you are worried about variants, then you have a problem, and you want to vaccinate ahead of schedule.”

On NBC, Fauci said people shouldn’t wait for the best vaccine but take what’s offered. “Even one that may be somewhat less effective is still effective against severe disease, as we have seen with the J&J vaccine,” he said. “Get vaccinated when the vaccine is available to you.”

What is an “algorithm”? It depends whom you ask

Describing a decision-making system as an “algorithm” is often a way to deflect accountability for human decisions. For many, the term implies a set of rules based objectively on empirical evidence or data. It also suggests a system that is highly complex—perhaps so complex that a human would struggle to understand its inner workings or anticipate its behavior when deployed.

But is this characterization accurate? Not always.

For example, in late December Stanford Medical Center’s misallocation of covid-19 vaccines was blamed on a distribution “algorithm” that favored high-ranking administrators over frontline doctors. The hospital claimed to have consulted with ethicists to design its “very complex algorithm,” which a representative said “clearly didn’t work right,” as MIT Technology Review reported at the time. While many people interpreted the use of the term to mean that AI or machine learning was involved, the system was in fact a medical algorithm, which is functionally different. It was more akin to a very simple formula or decision tree designed by a human committee.

This disconnect highlights a growing issue. As predictive models proliferate, the public becomes more wary of their use in making critical decisions. But as policymakers begin to develop standards for assessing and auditing algorithms, they must first define the class of decision-making or decision support tools to which their policies will apply. Leaving the term “algorithm” open to interpretation could place some of the models with the biggest impact beyond the reach of policies designed to ensure that such systems don’t hurt people.

How to ID an algorithm

So is Stanford’s “algorithm” an algorithm? That depends how you define the term. While there’s no universally accepted definition, a common one comes from a 1971 textbook written by computer scientist Harold Stone, who states: “An algorithm is a set of rules that precisely define a sequence of operations.” This definition encompasses everything from recipes to complex neural networks: an audit policy based on it would be laughably broad.

In statistics and machine learning, we usually think of the algorithm as the set of instructions a computer executes to learn from data. In these fields, the resulting structured information is typically called a model. The information the computer learns from the data via the algorithm may look like “weights” by which to multiply each input factor, or it may be much more complicated. The complexity of the algorithm itself may also vary. And the impacts of these algorithms ultimately depend on the data to which they are applied and the context in which the resulting model is deployed. The same algorithm could have a net positive impact when applied in one context and a very different effect when applied in another.
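To make the distinction concrete, here is a minimal sketch (toy data, invented function names) in which the training loop is the "algorithm" and the single learned weight it produces is the "model":

```python
# The train() function below is the "algorithm": a fixed set of instructions
# the computer executes to learn from data. The weight it returns is the
# "model": the structured information learned from that data.

data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]  # toy (input, output) pairs

def train(data, steps=1000, lr=0.01):
    """Gradient descent on y ~ w * x."""
    w = 0.0
    for _ in range(steps):
        grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
        w -= lr * grad
    return w  # the learned "model": here, a single weight

model = train(data)
print(f"learned weight: {model:.2f}")  # ~2.04 for this toy data
```

Running the same train function on different data produces a different model, which is precisely why the same algorithm can have very different impacts in different contexts.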

In other domains, what's described above as a model is itself called an algorithm. Though that's confusing, under the broadest definition it is also accurate: models are rules (learned by the computer's training algorithm instead of stated directly by humans) that define a sequence of operations. For example, last year in the UK, the media described the failure of an "algorithm" to assign fair scores to students who couldn't sit for their exams because of covid-19. Presumably, what these reports were discussing was the model—the set of instructions that translated inputs (a student's past performance or a teacher's evaluation) into outputs (a score).

What seems to have happened at Stanford is that humans—including ethicists—sat down and determined what series of operations the system should use to determine, on the basis of inputs such as an employee’s age and department, whether that person should be among the first to get a vaccine. From what we know, this sequence wasn’t based on an estimation procedure that optimized for some quantitative target. It was a set of normative decisions about how vaccines should be prioritized, formalized in the language of an algorithm. This approach qualifies as an algorithm in medical terminology and under the broad definition, even though the only intelligence involved was that of humans.
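To illustrate, with invented rules rather than Stanford's actual criteria, a human-designed prioritization of this kind can be as simple as a short scoring function; it still satisfies the broad definition of an algorithm even though no machine learning is involved:

```python
# A hypothetical, human-written prioritization rule of the kind described
# above. The rules and weights are invented for illustration; they are NOT
# Stanford's actual criteria.

def priority_score(age: int, department: str, patient_facing: bool) -> int:
    score = 0
    if age >= 65:
        score += 2   # prioritize older employees
    if patient_facing:
        score += 3   # prioritize direct patient contact
    if department == "emergency":
        score += 1
    return score

# Employees with higher scores would be offered the vaccine first.
print(priority_score(age=70, department="administration", patient_facing=False))  # 2
print(priority_score(age=30, department="emergency", patient_facing=True))        # 4
```

As the Stanford episode shows, even rules this simple can produce skewed outcomes if the weights are chosen badly.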

Focus on impact, not input

Lawmakers are also weighing in on what an algorithm is. Introduced in the US Congress in 2019, HR2291, or the Algorithmic Accountability Act, uses the term “automated decisionmaking system” and defines it as “a computational process, including one derived from machine learning, statistics, or other data processing or artificial intelligence techniques, that makes a decision or facilitates human decision making, that impacts consumers.”

Similarly, New York City is considering Int 1894, a law that would introduce mandatory audits of “automated employment decision tools,” defined as “any system whose function is governed by statistical theory, or systems whose parameters are defined by such systems.” Notably, both bills mandate audits but provide only high-level guidelines on what an audit is.

As decision-makers in both government and industry create standards for algorithmic audits, disagreements about what counts as an algorithm are likely. Rather than trying to agree on a common definition of “algorithm” or a particular universal auditing technique, we suggest evaluating automated systems primarily based on their impact. By focusing on outcome rather than input, we avoid needless debates over technical complexity. What matters is the potential for harm, regardless of whether we’re discussing an algebraic formula or a deep neural network.

Impact is a critical assessment factor in other fields. It’s built into the classic DREAD framework in cybersecurity, which was first popularized by Microsoft in the early 2000s and is still used at some corporations. The “A” in DREAD asks threat assessors to quantify “affected users” by asking how many people would suffer the impact of an identified vulnerability. Impact assessments are also common in human rights and sustainability analyses, and we’ve seen some early developers of AI impact assessments create similar rubrics. For example, Canada’s Algorithmic Impact Assessment provides a score based on qualitative questions such as “Are clients in this line of business particularly vulnerable? (yes or no).”
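As a sketch of how such a rubric works in practice, a DREAD score is commonly computed by rating each of the five factors on a shared scale and averaging them (the exact scale varied from team to team; the 1-to-10 range below is one common convention):

```python
# DREAD scoring: rate Damage, Reproducibility, Exploitability, Affected
# users, and Discoverability, then average. The 1-10 scale is one common
# convention; teams used different ranges in practice.

def dread_score(damage, reproducibility, exploitability,
                affected_users, discoverability):
    ratings = [damage, reproducibility, exploitability,
               affected_users, discoverability]
    assert all(1 <= r <= 10 for r in ratings), "ratings should be 1-10"
    return sum(ratings) / len(ratings)

# Hypothetical vulnerability: moderate damage, easy to reproduce,
# affecting most users.
print(dread_score(damage=5, reproducibility=8, exploitability=6,
                  affected_users=9, discoverability=7))  # -> 7.0
```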

There are certainly difficulties in introducing a loosely defined term such as "impact" into any assessment. The DREAD framework was later supplemented or replaced by STRIDE, in part because of challenges with reconciling different beliefs about what threat modeling entails. Microsoft stopped using DREAD in 2008.

In the AI field, conferences and journals have already introduced impact statements with varying degrees of success and controversy. The approach is far from foolproof: impact assessments that are purely formulaic can easily be gamed, while an overly vague definition can lead to arbitrary or impossibly lengthy assessments.

Still, it’s an important step forward. The term “algorithm,” however defined, shouldn’t be a shield to absolve the humans who designed and deployed any system of responsibility for the consequences of its use. This is why the public is increasingly demanding algorithmic accountability—and the concept of impact offers a useful common ground for different groups working to meet that demand.

Kristian Lum is an assistant research professor in the Computer and Information Science Department at the University of Pennsylvania.

Rumman Chowdhury is the director of the Machine Ethics, Transparency, and Accountability (META) team at Twitter. She was previously the CEO and founder of Parity, an algorithmic audit platform, and global lead for responsible AI at Accenture.