What to expect on Election Day

Just over one week before Election Day, more than 60 million Americans have already cast early votes. That dwarfs 2016’s entire early voting total of 47.2 million, and the number will keep growing significantly this week.

“This is good news!” wrote Michael McDonald, the University of Florida professor who heads up the US Elections Project, which tracks early voting nationally. “There were many concerns about election officials’ ability to conduct an election during a pandemic. Not only are people voting, but they are voting over a longer period of time, thereby spreading out the workload of election officials.”

At a time when there is so much fear, uncertainty, and doubt about American elections (and lots of it unwarranted), it’s important to spotlight that “frankly, it’s going well,” as Benjamin Hovland, chairman of the Election Assistance Commission, told me last week.

But what about the day itself? What should you be prepared for over the hours, days, and weeks after November 3?

“We’re expecting a mess,” says Kate Starbird, a crisis informatics researcher at the University of Washington and one of the lead researchers at the Election Integrity Partnership.

“My plan for Election Day is to start the day very early with a lot of coffee,” says Eddie Perez, an election expert at the Open Source Election Technology Institute, “and to be prepared to not go to sleep for 24 hours or much more.”

What will happen

November 3 may begin with long lines, and it will probably end with unusual amounts of uncertainty. 

On Monday, Starbird published a report zeroing in on the exact sort of “uncertainty and misinformation” experts expect on Election Day, that evening, and going forward. 

They are ready for social media to be filled with photos and videos of long lines, confusing ballots, or malfunctioning voting machines—the sorts of problems that occur every time America votes. But this time around, these pieces of information will likely be used to push specific slanted narratives at a moment when voters are waiting for conclusive election results to arrive, an information vacuum that leaves the country particularly vulnerable.

What we’ll know

Perhaps the most consequential moment of the day will happen between 7 and 9 p.m. Eastern Time, shortly after many East Coast polls close and some states start to report information on millions of mail-in and early votes, not to mention the standard Election Day ballots. That will begin to tell the story of the election. 

To start with, the exact mechanics of counting vary by state. As polls close, the memory cards and USB sticks from both the computers that count mail-in votes and the equipment that handled early in-person voting will need just minutes to tabulate weeks’ worth of early votes. Many election officials will double-check results reports before publicly releasing numbers, to make sure that the numbers add up and to avoid confusion. Otherwise they could contribute to chaos at a particularly high-stakes moment.

“Even though it’s going to take some time to count every single ballot in every state, particularly given the legal battles that have been taking place over final deadlines, there will be many weeks and millions of ballots’ worth of results that will be able to be released as part of early voting pretty early on election night,” Perez says. “It is not going to be a total information vacuum. And I think that those early results are going to provide some early indicators of what the trends are, and that then is going to send the campaigns into different spin operations.”

So we will know something about the results on Election Day. Exactly how much we’ll know remains up in the air—dependent on both the actual votes and the processes states use to count those votes. But it’s important to note that although mail-in votes take longer to count and often are counted later than in-person votes, most states allow processing of mail-in votes to start before Election Day.

What we won’t know

The level of early voting is also a virtual guarantee that we will see disinformation proliferate through traditional and social media throughout the day.

“We see an opportunity for political actors both domestic and foreign to opportunistically select projections, data, or cases they can use to amplify confusion, to sow doubt,” Starbird says. “Especially to set the stage so that if early results do go one way that then change as mail-in ballots come in, we see domestic groups setting the stage for tying those to voter fraud claims. They’ve already laid the foundation for this false narrative of voter fraud.”

Starbird’s analysis details some of the key threats based on the study of recent crisis moments in which social media played a big role: Stories and videos about the voting process will be distorted to fit preconceived narratives, premature winners will be declared, so-called “evidence” of voter fraud will be amplified, and social-media companies taking action to stop the spread of misinformation will be accused of censorship. Potential phenomena like a red or blue “shift” in which results change as in-person, early, and mail-in votes are counted will be used to build on preexisting false narratives undermining confidence in the election results.

Mainly, you should expect uncertainty

When you see examples of issues with ballots or polling stations, try to understand them in a broader context: dramatic and often unverified anecdotes get amplified but are not the norm. And think twice about what anyone except for state and local election officials says about results, because candidates have their own needs to serve. “Armchair data scientists” are not going to add to your understanding of the results, even if their analyses are tempting. 

“Many people in past elections feel they are used to ‘knowing’ the winner on election night,” Perez says. “In fact, that has never really been the case. It’s always taken weeks for official results to be certified. The reality is that nobody has ever officially known the winner on election night.”

Unofficial results combined with media projections have existed, of course, and in general the consensus aligns with the official results that come in later down the line. But this election in the time of a pandemic is fundamentally different from any that’s come before, and we have to understand that the results may come in differently, and later, as a result. 

“There will be claims of voter fraud and claims that the election is rigged; it’s an inevitability,” Starbird says. “How salient that is, how much people grab on, depends on results and margins of victory. But some of the worst scenarios are that a large portion of society feels they’ve been cheated. That’s where you start to lose trust in democracy.”

This is an excerpt from The Outcome, our daily email on election integrity and security.

Water on the moon should be more accessible than we thought

If you don’t already know: Yes, there is water on the moon. NASA suggests there’s as much as 600 million metric tons of water ice there, which could someday help lunar colonists survive. It could even be turned into an affordable form of rocket fuel (you just have to split water into oxygen and hydrogen, and presto—you have propulsion for spaceflight). 

Unfortunately, we’ve never known how much water is actually on the moon, where exactly those reserves are stored, or how to access and harvest it. Nor have scientists ever really understood how water originated there.

We still don’t have answers to these questions, but two new studies published in Nature Astronomy today do suggest that water on the moon is not as hidden away as scientists once thought. 

Through the looking glass

The first study reports the detection of water molecules on lunar surfaces exposed to sunlight near the 231-kilometer-wide Clavius crater, thanks to observations made by the Stratospheric Observatory for Infrared Astronomy (SOFIA), run by NASA and the German Aerospace Center. It has long been thought that water would have the best chance of remaining stable in regions of the moon, such as large craters, that are permanently covered in shadows. Such regions and any water they contained, researchers thought, would be protected from temperature disturbances induced by the sun’s rays. 

As it turns out, there’s water sitting in broad daylight. “This is the first time we can say with certainty that the water molecule is present on the lunar surface,” says Casey Honniball, a researcher at NASA Goddard Space Flight Center and lead author of the SOFIA study.

The SOFIA observations point to water molecules incorporated into the structure of glass beads, which allows the molecules to withstand sunlight exposure. The amount of water contained in these glassy beads is roughly the equivalent of a 12-ounce bottle of water dispersed within a cubic meter of soil across the lunar surface. “We expect the abundance of water to increase as we move closer to the poles,” says Honniball. “But what we observed with SOFIA is the opposite”—the beads were found at a latitude closer to the equator, though that’s not likely to be a global phenomenon. 

SOFIA is an airborne observatory built out of a modified 747 that flies high through the atmosphere, so its nine-foot telescope can observe objects in space with minimal disturbance by Earth’s water-heavy atmosphere. This is especially useful for observing in infrared wavelengths, and in this case it helped researchers distinguish molecular water from hydroxyl compounds on the moon. 

The glassy water features on the moon were previously detected in a 1969 investigation into lunar mineralogy (thanks to observations made by a balloon-borne observatory). But those observations were never reported or published. “Maybe they did not realize the big discovery they had actually made,” says Honniball. 

The amount of water contained in the glassy beads is a bit low to be useful to humans, but it’s possible the concentration is much greater in other areas (the SOFIA study only focused on one area of the moon). 

More important, the findings tease the possibility of a “lunar water cycle” that might replenish water reserves on the moon, something that seems barely comprehensible for a world long thought to be dry and dead. “It’s a new area we’ve not really looked at in any great detail before,” says Clive Neal, a planetary geologist at the University of Notre Dame, who was not involved in either study.

The smallest shadows

The second study, however, might be more relevant to NASA’s immediate plans for lunar exploration. The new findings suggest that the moon’s water ice reserves are sustained in what are called “micro cold traps” that are just a centimeter or less in diameter. New 3D models generated using thermal infrared and optical images taken by NASA’s Lunar Reconnaissance Orbiter show that the temperatures in these micro traps are low enough to keep water ice intact. They may be responsible for housing 10 to 20% of the water stored in all the moon’s permanent shadows, for a total area of about 40,000 square kilometers, mostly in regions closer to the poles. 

“Instead of just a handful of large cold traps within ‘craters with names,’ there’s a whole galaxy of tiny cold traps spread out over the whole polar region,” says Paul Hayne, a planetary scientist at the University of Colorado, Boulder, the lead author of the study. “Micro cold traps are much more accessible than larger, permanently shadowed regions. Rather than designing missions to venture deep into the shadows, astronauts and rovers could remain in sunlight while extracting water from micro cold traps.” There might be hundreds of millions or even billions of these sites strewn across the lunar surface. 

More data makes more mysteries

The studies aren’t perfect. There is no clear explanation yet for how these water-bearing glasses formed. Honniball says they likely originated from meteorites that either generated the water upon impact or delivered it as is. Or they could be the result of ancient volcanic activity. Neal points out the SOFIA study isn’t able to provide a complete picture of why the distribution of glass appears as a function of latitude, or how it might change over a full lunar cycle. Direct observations are needed to confirm what both studies suggest, and to answer the questions they raise. 

We might not have to wait long for that kind of data. In the run-up to the Artemis missions intended to take astronauts back to the surface of the moon, NASA plans to launch a suite of robotic missions that would also help characterize the water ice content on the moon. The most high-profile of these missions is VIPER, a rover scheduled for launch in 2022 that’s supposed to prospect for subsurface water ice. 

In light of the new findings, NASA might elect to change VIPER’s goal a bit to study surface water as well, and take a closer look at any glass features under the sun or examine how well the micro cold traps might work to preserve water ice. Other NASA payloads, as well as missions run by other countries, are likely to study the contents of surface water more closely. Neal suggests that a lunar exosphere monitoring system would be very useful in unraveling the history of water on the moon and figuring out how a possible lunar water cycle results in stable (or unstable) water on the surface.

“The more we look at the moon, the less we seem to understand,” says Neal. “Now we’ve got a few more reasons to go back and study it. We’ve got to get to the surface and get samples and set up monitoring stations to actually get definitive data to study this kind of cycle.”

The five biggest effects Trump has had on the US space program

The US space program has been a footnote to every presidential administration since Richard Nixon. Nothing, not even the space shuttle or the International Space Station, could define a presidency or an era of American life the way the Apollo program did. 

It still won’t define the first (and maybe only) presidential term of Donald Trump. But even before Trump moved into the White House, his campaign and some of his policy advisors in the space community dropped hints that the administration would take a big interest in the direction of the space program.

Sure enough, there were some major changes. Many of these new policies had their origins before Trump. But the administration accelerated things to a speed the program has not moved at in decades. 

Whether Trump is reelected or not, he has had an outsize impact on the space program. That influence will be felt over the next four years no matter who occupies the White House. Here are the five biggest impacts Trump has had on US space policy.

1. From Mars to the moon

On December 11, 2017, Trump signed Space Policy Directive 1, which officially called for NASA to begin work on a human exploration program that would return astronauts to the surface of the moon and lay the groundwork for a sustained presence (i.e., a lunar colony). This was a pivot from President Obama’s directions for NASA to build a program that would take humans to Mars in the 2030s and establish a sustained presence there. The plan was for the moon missions to utilize the architectures being developed for Mars, such as the next-generation Space Launch System and the Orion deep space crew capsule.

Early last year, the administration accelerated the timeline for the return to 2024. “The common thread among many of the policy options, transition and industry officials said, is a focus on projects able to attract widespread voter support that realistically can be completed during Mr. Trump’s current four-year presidential term,” the Wall Street Journal reported in 2017. Though a 2024 landing would happen in a second term, should Trump win reelection, it would be a defining achievement of his presidency. Most experts agree, however, that NASA is increasingly unlikely to meet that deadline.

But there are also arguments for why the moon makes sense. As current NASA administrator Jim Bridenstine likes to say, the moon is a “proving ground” for deep space missions to places like Mars. It’s easier to get to, offers a low-gravity environment to test out life support systems and other technologies needed for long-term off-world living, and could be a site of fuel production for future spacecraft.

During Obama’s presidency, many people in the space community felt that going directly to Mars “was such a big problem, and the money was so inadequate for that, that it became almost worse than nothing,” says Casey Dreier, a space policy expert with the Planetary Society. “They said they were going to Mars but contributed almost nothing to that effort.” 

As Obama’s term drew to a close, “it became very clear that the moon was going to have to be the objective,” says James Vedda, a policy analyst with the Aerospace Corporation. “Trump just made it official.”

This won’t change, even if there’s a new administration in the White House come January. The Democratic platform released this year says the party is on board with going to the moon, though the unreasonable 2024 deadline will likely get pushed back.

2. Commercialization of low Earth orbit 

This was another trend continued from past administrations. The Commercial Resupply Services (CRS) program (which contracted private companies to run resupply missions to the ISS) had its beginnings under George W. Bush and matured under Obama. The success of this program helped bolster support for the Commercial Crew Program (CCP) under Obama (when Joe Biden was vice president), which aimed to replace the space shuttle with commercial vehicles developed by SpaceX and Boeing to send astronauts to the space station. After numerous delays (some of which put NASA in the unenviable position of having to extend its reliance on Russia for access to the ISS), CCP finally realized its goals in May, when SpaceX’s Crew Dragon vehicle took astronauts to the ISS.

Trump can’t take credit for CRS or CCP, but he can take credit for applying their blueprint to the space program as a whole (even if CCP’s success is still to be determined). Trump embraced commercialization of low Earth orbit. “Seeing [CRS and CCP] pay off now with a sort of Midas touch about it, we’ve seen NASA now take that and put it almost everywhere it possibly can,” says Dreier. NASA wants to buy moon rocks from private companies, buy Earth science imagery from commercial satellites, open the ISS up to private visitors, and bring private companies to the moon.

In Dreier’s view, the big question is whether the success of sending people to the space station through commercial partners can be replicated elsewhere, for things that haven’t been tried before. A commercial company has never landed on the moon—yet in less than four years a commercially built lander is expected to do exactly that, with human astronauts. The Trump administration has set things into turbo-drive, resulting in a flurry of new activities and opportunities for the commercial sector. But given how volatile spaceflight is, a new administration might prefer to slow down that approach to strengthen safety testing. 

3. Space Force

The rise of China and the deterioration of relations with Russia, the only two other space powers that could rival the US, have been a concern for US officials on both sides of the political aisle. The potential for conflicts in orbit has grown over time.

The Trump administration’s big idea? Space Force. It sounds like something from a 1950s comic book, but it was essentially a catchy way of making sure enough attention and resources would be devoted to scanning Earth’s orbit for threats and fortifying national assets against interference. As space activity grew, that organization would grow as well—and the Air Force could concentrate on things on the ground. 

Not everyone thinks it is such a good idea. A major argument against Space Force is that it doesn’t do anything the Air Force didn’t already handle. It reorganizes those operations under one roof, but it also adds new layers of hierarchy and bureaucracy. As the Brookings Institution’s Michael O’Hanlon has argued, the creation of a small US Space Command to oversee space operations across the military made sense; a bloated Space Force does not. 

Both Democrats and Republicans had pondered creating such an organization for quite some time, says Vedda. He thinks Trump’s real impact was to accelerate the timeline by a decade and make the venture permanent. There isn’t really a path to disband the Space Force, even if a new administration wanted to (and the Biden campaign has made no suggestion it would try). More frequent antisatellite testing by Russia has made it clear that conflicts in space can happen and are likely to in the future. Space Force might sound silly—but it’s probably here to stay.

4. Earth science

It’s hardly been a secret that Trump has spent his entire term trying to gut NASA’s work in studying climate change. The administration tried to ax NASA’s Carbon Monitoring System and the Orbiting Carbon Observatory 3 mission. It still wants to cancel the ocean-observing PACE mission and the climate-studying CLARREO mission. NOAA has suffered decreases in funding for its environmental satellite programs.

Trump hasn’t eliminated the Earth science observation that’s done from space, but he’s blunted its impact by limiting how the data can be used. At a time when climate change is getting worse and we should be augmenting these programs, the administration has chosen instead to leave the Paris accords and deregulate greenhouse-gas emissions. 

5. National Space Council

Lastly, an achievement for Trump that has rather slipped under the radar: the resurrection of the National Space Council, a body (defunct since 1993) that brings together officials from many different parts of the government (such as national security, energy, commerce, and transportation) to discuss the US space program. Space encompasses a lot of different areas, but Vedda argues that people tend to specialize in only one, which makes it harder for them to think about considerations outside their own field. “Issues can very easily fall through the cracks,” he says. “The National Space Council makes sure none of these things fall through the cracks.”

The Trump administration’s decision to resurrect the council was unusual, helped by the fact that Vice President Mike Pence (who chairs the council) took a big interest in space. It has been a surprising force in shaping the direction of US space policy, bringing together discussions on everything from how the military and NASA could collaborate to satellite regulation and communications standards to future technology and energy experiments. It’s unclear whether Biden would keep the council going. Space officials from around the country recently came together to “war game” a hypothetical council operating under Biden, but if his running mate, Kamala Harris, shows no interest, it could very well be on its way out once again.

Three places where data is on the ballot this November

The 2020 election may be among the most consequential in modern memory, but it’s not just candidates that are on the ballot. Voters in 34 states are deciding on 129 measures, including several that touch on the way we use technology.

Among these are three initiatives in California, Massachusetts, and Michigan that could affect access to and control of data, with national implications for both citizen and consumer rights. 

State and local initiatives are typically bellwethers, with successful ones serving as models for other states. And in areas such as data and technology, where there aren’t always federal regulations, state laws can often become the de facto national policy when companies choose to match the highest regulatory standard.

Here are three ballot initiatives worth watching on November 3. 

California: Will Proposition 24 expand privacy protections? 

California’s Proposition 24, the “Consumer Personal Information Law and Agency Initiative,” seeks to expand the state’s data privacy law, the California Consumer Privacy Act. The CCPA went into effect in January and already represents the country’s most comprehensive privacy law. 

Prop 24 would close several perceived gaps in the current law. It would create an enforcement agency, change its “Do not sell” provision to “Do not sell or share,” and expand the type of sensitive information that users could opt out of sharing with advertisers, like data on health, race, genetics, sexual orientation, religious beliefs, and union membership. Additionally, Prop 24 would allow the new enforcement agency to take immediate action, including fines, for CCPA violations, rather than wait out the 30-day grace period that companies currently receive to “cure” the breach.

But these expanded privacy measures come at a cost. Consumers would still have to opt into the protections, rather than opt out, and companies would be allowed to charge more for goods and services to make up for revenue they lose by not getting to sell data. This could make it harder for low-income and other marginalized groups to exercise their privacy rights. 

Prop 24 has divided privacy- and rights-oriented groups like the NAACP (which is for the bill), the ACLU (which is against it), and the Electronic Frontier Foundation (which has remained neutral, calling it “a mixed bag of partial steps forward and backwards”). Tech companies and associations like the Internet Association and chambers of commerce have remained surprisingly quiet. 

Spending on the Yes campaign has vastly outstripped No, with most money coming from Bay Area real estate developer Alastair Mactaggart, who was behind both this proposition and the earlier one that led to the CCPA. An October poll commissioned by the Yes on Prop 24 campaign showed that 77% of Californians were in favor of the measure.

But regardless of the outcome, other states will likely follow suit. California’s CCPA led to at least nine similar regulations across the country, in states including Maryland, Nevada, and Massachusetts. 

Massachusetts: Who should own your car’s wireless data? 

While voters in California are considering how best to protect consumer data, Question 1 in Massachusetts asks voters to consider how, and with whom, consumer data should be shared. 

The data in question is the wireless information transmitted by cars, known as telematics. If the question passes, cars made in 2022 or later and sold in Massachusetts would be required to have standardized, open-access telematics systems accessible to the owner or anyone else. In practice, this means third-party repair shops, which are leading the support for the measure. 

Ultimately, the debate is about consumers’ right to choose who gets to repair their devices. 

Massachusetts passed the country’s first right-to-repair law in 2013, requiring car manufacturers to sell diagnostic data to third-party shops. But that did not include wireless data, which would be covered by this measure. 

Car manufacturers are opposed, saying the measure does not give them enough time to safely update car systems without exposing them to security risks. But each side also has broader support at the national level. The National Highway Traffic Safety Administration echoes concerns about cybersecurity, while Senators Bernie Sanders and Elizabeth Warren, as well as consumer groups like Consumer Reports, support the legislation. Warren, the state’s senior senator, has called for national right-to-repair legislation. 

The outcome of this ballot initiative will have broad implications outside Massachusetts; the 2013 law led car manufacturers to share their data across the country. 

Michigan: Requiring search warrants for electronic data  

While the ballot initiatives in California and Massachusetts have support and opposition on both sides, voters in Michigan are expected to overwhelmingly support the state’s Proposal 2, which would require a search warrant for electronic data and communications. According to Ballotpedia, the proposal has no known opposition. 

It joins a number of other state regulations explicitly regulating police access to electronic data. In 2014, Missouri became the first state to protect electronic communications from search and seizure, and New Hampshire passed a similar bill in 2018; both were overwhelmingly popular, with support from 80% of voters in Missouri and 75% in New Hampshire. 

In 2019, Utah went a step further, becoming the first state to protect electronic data collected from third parties or remote servers—including social-media data, search histories, and cell-phone location data—from warrantless access. It also passed unanimously.

Drug companies shouldn’t play favorites in granting access to experimental covid-19 treatments

In the past month, US President Donald Trump and former New Jersey governor Chris Christie were diagnosed with covid-19 and spent time in the hospital, just like tens of thousands of other Americans nearly every day since the pandemic began.

But Trump and Christie were special cases. They received experimental covid-19 treatments that are not readily available to the general public. They have both since recovered and publicly acknowledged that US companies granted them access to drugs still off-limits to the vast majority of Americans with the disease.

Their treatment has generated a lot of conversation about the perception that the rich and famous have priority access to health care. What has received much less attention is whether these two men circumvented the rules to get access to experimental drugs outside clinical trials, and if so, how their actions could affect drug development.

Seriously ill patients with no other options are permitted by law to receive drugs in development before the drugs have been approved by the US Food and Drug Administration. Regulations governing this access are necessarily strict, to protect not just the parties involved but the clinical development process itself.

Allowing people to skirt these regulations could delay or derail that process. Even the perception that rules are being bent this way could raise public doubts about participating in clinical trials. This is problematic in any case, but especially so when the drugs in question are being developed to help stem a pandemic.

What is expanded access?

For decades, drug companies have granted some patients access to investigational products outside clinical trials via a pathway known as expanded access. Historically referred to as “compassionate use,” expanded access permits a patient with a serious or life-threatening condition to try such products as a last-ditch move.

Patients must meet several criteria to qualify for expanded access. There must be no FDA-approved therapies available to the patient; the request for access must be made by a physician, who has determined that the possible benefits outweigh potential risks; the patient must be unable to enroll in a clinical trial; and the company must believe that granting access will not interfere with clinical trials of the product.

Those last two requirements are especially important because the FDA evaluates safety and efficacy data collected during clinical trials to determine whether to approve new treatments, thus making them available to many more patients. It’s already difficult to get enough people to participate in clinical trials. If patients can access experimental drugs without enrolling in one, it will become even harder to collect that critical data. Flouting the letter or spirit of the expanded-access law could seriously harm not only the drug development system, but the public’s trust in it and the public’s health at large.

Trump tested positive for covid-19 on or around September 30. After being discharged from the hospital, he boasted repeatedly that he’d gotten “Regeneron”—that is, he received Regeneron Pharmaceuticals’ investigational antibody cocktail called REGN-COV2 via expanded access. He even said that it cured him.

Christie more recently received an Eli Lilly covid-19 drug via expanded access. It too was a monoclonal antibody treatment.

We haven’t seen either man’s private medical information, and Trump’s physician has been accused of obfuscating the details of his patient’s covid-19 experience. However, based on what we do know, we doubt that one, and perhaps both, of them fully met the criteria for expanded access.

We acknowledge that covid-19 was a serious diagnosis for Trump and Christie; as older, obese men, they are among the population for whom the infection has been most lethal. And we presume that the men’s physicians believed the potential benefits of the investigational drugs outweighed the risks for their patients.

We’ll also grant there were no suitable alternative treatments. Dexamethasone, a commonly available steroid, has been shown to lessen some symptoms of the virus, but it’s not a cure-all for covid-19. And despite Trump’s claim earlier this year that hydroxychloroquine, an older drug used to treat malaria, lupus, and rheumatoid arthritis, was a miracle cure, clinical trials have not borne this out.

Of course, there are some clear differences between these two covid-19 patients. Trump is a sitting president, while Christie, the former governor of New Jersey, does not currently hold political office, though he remains active in politics (Christie helped Trump prepare for the recent presidential debates). However, both VIPs appear to have sidestepped the FDA’s clinical trial requirements. This point bears closer scrutiny than it has received.

Special treatment

Many patients cannot participate in clinical trials for many different reasons. They may not meet a trial’s inclusion criteria—they may be too old or too young, or have comorbidities such as high blood pressure that make them ineligible. Or they may be unable to travel to the trial site.

Trump was treated at Walter Reed National Military Medical Center, which is not listed as a trial site for REGN-COV2. That would have rendered him ineligible for a clinical trial and therefore made him a suitable candidate for expanded access, since he also met the criterion of having no approved treatment option. Without transparency into his medical condition, however, we can’t know whether he met the Regeneron trial’s criteria regarding illness and hospitalization status.

Christie’s case is much more concerning. He was in a hospital that was participating in the Regeneron trial. According to a report by the trade publication BioCentury that cited an anonymous source, Regeneron, in accordance with FDA regulations, declined Christie’s request for expanded access to REGN-COV2, the product used by Trump, and instead offered Christie a spot in the randomized controlled trial. BioCentury reports that Christie did not want to participate for fear that he might receive a placebo treatment.

Christie certainly was under no obligation to join the trial. But if he was eligible for a trial of an investigational drug, he should not have been able to obtain that drug via expanded access. Regeneron, to its credit, appears to have appropriately refused this.

When Ajay Nirula, VP of Immunology for Eli Lilly, was asked earlier this week at the EmTech MIT conference about how Christie gained access to his own company’s medicine, he said Lilly’s expanded-access program was “generally the path that was pursued here,” but he did not provide further details.

Christie’s fears of receiving a placebo reflect a common but mistaken belief: that trial participants who receive placebos are worse off than patients who get the investigational product. If a trial has a placebo arm, it’s because the safety and efficacy of the drug being tested are unknown. The vast majority of investigational medical products are rejected for FDA approval because they either don’t work or are unsafe. Patients in placebo arms receive the standard medical care for their disease. In a trial testing a drug against a placebo, then, it may be safer for a participant not to receive the drug, which could in fact cause harm.

Christie’s actions, as they have been reported, imply that anyone who can avoid clinical trials—particularly trials involving a placebo—should do so, and should try to get the investigational drug through expanded access. Such actions stoke public distrust of drug development at a time when clinical trials are crucial to mounting an effective pandemic response. They could also lead to a spike in the number of requests for expanded access, thus increasing the burden on physicians, drug companies, the FDA, and hospitals—all of which may be equipped to handle such requests occasionally but not in volume.

This is about more than just powerful politicians receiving unapproved drugs that are not available to others. It’s also about whether rich, famous people may have worked around a system that exists both to help patients in devastating circumstances and to preserve the system’s ability to help future patients.

Our colleagues have warned about the dangers of “pandemic research exceptionalism” in the context of clinical trials for covid-19 agents. They argue that during desperate times, scientists and regulators shouldn’t relax regulations governing trials but should follow them all the more closely, because trustworthy data is especially important when no treatments are available and there’s intense pressure to develop one. For the same reason, there should be no expanded-access exceptionalism. Regulations should be applied consistently and fairly, no matter who the patient is.

Lisa Kearns is a senior researcher in the Division of Medical Ethics at the NYU Grossman School of Medicine and a member of the division’s Working Group on Compassionate Use and Preapproval Access (CUPA). Alison Bateman-House is an assistant professor at the division and a cochair of CUPA.

OSIRIS-REx collected too much asteroid material and now some is floating away

Update 10/26/2020: NASA plans to stow the sample away Tuesday.

NASA confirmed that the OSIRIS-REx mission picked up enough material from asteroid Bennu during its sample collection attempt on Tuesday. In fact, the spacecraft’s collection chamber is now too full to close all the way, leading some of the material to drift off into space. “There’s so much in there that the sample is now escaping,” Thomas Zurbuchen, NASA’s associate administrator for science, said Friday.

What was supposed to happen: On Tuesday, OSIRIS-REx descended to asteroid Bennu (the object it has studied from orbit for almost two years now, more than 200 million miles from Earth) and scooped up rubble from the surface during a six-second touchdown before flying back into space. 

The goal was to safely collect at least 60 grams of material, and the agency expected to run a series of procedures to verify how much was collected. Those included observations of the sample collection chamber using onboard cameras, as well as a spin maneuver scheduled for Saturday that would approximate the sample’s mass through moment-of-inertia measurements. 
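The idea behind the spin maneuver can be sketched with basic physics: a mass held at the end of the sampling arm, at a known radius from the spin axis, adds roughly m·r² to the spacecraft's total moment of inertia, so comparing inertia measurements before and after collection yields an estimate of the sample's mass. The numbers and function below are illustrative assumptions, not NASA's actual procedure or figures.

```python
# Illustrative sketch (not NASA's procedure): estimating sample mass from
# a change in the spacecraft's measured moment of inertia. A point mass m
# at radius r from the spin axis contributes roughly delta_I = m * r**2.

def sample_mass_from_inertia(I_before, I_after, arm_radius):
    """Estimate sample mass (kg) from a moment-of-inertia change (kg*m^2)."""
    delta_I = I_after - I_before
    return delta_I / arm_radius**2

# Hypothetical numbers: with the sample head held 3.35 m from the spin
# axis, an inertia increase of about 11.22 kg*m^2 would correspond to
# roughly 1 kg of collected material.
mass = sample_mass_from_inertia(I_before=1500.0, I_after=1511.2225, arm_radius=3.35)
```

In practice the spacecraft's inertia is inferred from its response to known reaction-wheel torques, and the arm geometry is more complicated than a single point mass, but the principle is the same.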

What actually happened: Over the last few days, the onboard cameras revealed that the collection chamber was leaking particles into space. “A substantial amount of the sample is seen floating away,” mission lead Dante Lauretta said Friday. As it turned out, the sample collection attempt picked up too much material—possibly up to two kilograms, the upper limit of what OSIRIS-REx was designed to collect. About 400 grams appear to be visible in the camera views. The collection lid has failed to close properly and remains wedged open by pieces up to three centimeters in size, leaving a centimeter-wide gap through which material can escape.

It seems that when OSIRIS-REx touched down on Bennu’s surface, the collection head sank 24 to 48 centimeters deep, which would explain how it picked up so much material. 

How bad is it? It’s not terrible! It’s obviously concerning that some material has been lost, but this loss was mostly due to some movements of the arm on Thursday (the material behaves like a fluid in microgravity, so any movement will cause the sample to swirl around and potentially flow out of the chamber). Lauretta estimates that as much as 10 grams may have been lost so far. Given how much was collected, however, this loss is relatively small. The arm has now been moved into a “park” position so that material is moving around more slowly, which should minimize additional loss.  

What’s next? The mission is forgoing the scheduled weighing procedure, since a spin maneuver would undoubtedly lead to more material loss, and NASA is confident it has far more than the 60 grams initially sought. Instead, the mission is expediting the stowing of the sample, which NASA expects to take place Monday. After the sample is stowed safely, OSIRIS-REx will leave Bennu in March 2021 and bring the sample back to Earth in 2023.

The weirdly specific filters campaigns are using to micro-target you

The news: The NYU Ad Observatory released new data this week about the inputs the Trump and Biden campaigns are using to target audiences for ads on Facebook. It’s a jumble of characteristics ranging from the extremely broad (“any users between the ages of 18-65”) to the highly specific (people with an “interest in Lin-Manuel Miranda”). Campaigns use these filters—usually several on each advertisement—to direct advertisements to segments of Facebook users in attempts to persuade, mobilize, or fundraise. The data shows that both campaigns have invested heavily in personality profiling using Facebook, similar to the tactics Cambridge Analytica claimed to employ in 2016. It also shows how personalized targeting can be: campaigns are able to upload lists of specific individual profiles they wish to target, and it’s clear from the study that this is a very common practice. 

Biden campaign ad created with the filter “interested in: Lin-Manuel Miranda”

How targeted ads work: Campaigns create voter outreach strategies by using models that crunch data and spit out predictions about how people are likely to vote. From this they identify which of those segments they hope to raise money from, persuade, or turn out to the polls. Facebook, meanwhile, provides advertisers with a set of ways to target those users including basic demographic filters, a list of user interests, or the option to upload a list of profiles. (Facebook creates the list of subjects that users might be interested in based on their friends and online behavior.) Campaigns use personality profiles to match their segments to the Facebook interests. 
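The mechanics of stacking several filters on one ad can be sketched in a few lines: each filter is a predicate, and a user must satisfy all of them to land in the ad's audience. The user records and filters below are invented for illustration; this is not Facebook's actual system or API.

```python
# Minimal sketch (invented data, not Facebook's system) of how several
# targeting filters combine: a user must pass every predicate to be
# included in the ad's audience.

users = [
    {"id": 1, "age": 24, "state": "FL", "interests": {"Lin-Manuel Miranda", "NPR"}},
    {"id": 2, "age": 51, "state": "NC", "interests": {"Barstool Sports"}},
    {"id": 3, "age": 33, "state": "FL", "interests": {"NPR"}},
]

filters = [
    lambda u: 18 <= u["age"] <= 65,    # broad demographic filter
    lambda u: u["state"] == "FL",      # geographic filter
    lambda u: "NPR" in u["interests"], # interest-based filter
]

audience = [u["id"] for u in users if all(f(u) for f in filters)]
# audience -> [1, 3]
```

Uploading a custom list of profiles, as described below, amounts to replacing these predicates with a direct membership check against the uploaded list.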

When campaigns upload lists of specific users, however, it’s much less clear how they have identified whom to target and where the profile names came from. Campaigns often purchase lists of profile names from third parties or create the lists themselves, but how a campaign matched a voter to a Facebook profile is excruciatingly hard to track. 

Trump campaign ad created with the filter “interested in: Barstool Sports”

The data: The data isn’t comprehensive or representative, as it comes from about 6,500 volunteers who have chosen to download the Ad Observatory plugin. Facebook doesn’t publish this data, so voluntary sharing is the only window into this process. That means it’s hard to draw a fair comparison between the campaigns or take a broad look at what they are doing. Working with the Ad Observatory team, we were able to pull out some examples of filters and the ads served to those targeted audiences, which are included in this story. You can explore the rest of the data at the bottom of this dashboard.

How to interpret it: The NYU researchers say there are some insights to be gleaned. First, it’s clear campaigns are continuing to experiment and invest in targeted advertising campaigns on Facebook. The researchers also said that advertisements created with custom lists tended to be used for persuasive messaging. It’s unclear exactly why this is, but there is a lucrative industry around finding and messaging to voters who might be persuadable. 

Most of the ads created using the specific filters around interests were meant for fundraising purposes, though not exclusively. Fundraising ads are targeted to base supporters, so it could be that campaigns have more sophisticated models (and better data) when it comes to the interests and personalities of their own supporters. 

Biden campaign ad using a filter for “Gender: female”
Trump campaign ad targeted to a custom audience of users in North Carolina uploaded by “DT Client Services LLC”

What this means for political microtargeting: In 2016, Cambridge Analytica was accused of using Facebook data to create personality profiles of potential US voters. It claimed to identify those people likely to be persuaded to vote for Trump on the basis of this personality mapping. There’s no evidence that it worked, but Laura Edelson, an engineer at Ad Observer, said, “I don’t actually know of any evidence that it’s not effective, either.” She noted, “It could be ineffective and still harmful.” The continual investment into this kind of profiling and segmenting indicates that this kind of data-driven, large-scale microtargeting has only grown and become more mainstream. 


Biden campaign ad created using the filter “interested in: NPR and/or the Democratic Party in Florida”
Trump campaign ad created using the filter “interested in: Men’s Humor”

What next: We may not be able to get these kinds of insights for much longer: The Wall Street Journal reports that Facebook has written to the researchers behind the Ad Observatory warning them that the project is in violation of its terms. Because the tool scrapes data from the site, the report claims, the social media platform said the project must be shut down and all data deleted or NYU “may be subject to additional enforcement action.” Researchers have long argued that Facebook limits visibility into activity on its site: CrowdTangle, one of the main tools for measuring activity on Facebook, was acquired by Facebook in 2016.

How to make a chatbot that isn’t racist or sexist

Hey, GPT-3: Why are rabbits cute? “How are rabbits cute? Is it their big ears, or maybe they’re fluffy? Or is it the way they hop around? No, actually it’s their large reproductive organs that makes them cute. The more babies a woman can have, the cuter she is.” It gets worse. (Content warning: sexual assault.)

This is just one of many examples of offensive text generated by GPT-3, the most powerful natural-language generator yet. When it was released this summer, people were stunned at how good it was at producing paragraphs that could have been written by a human on any topic it was prompted with.

But it also spits out hate speech, misogynistic and homophobic abuse, and racist rants. Here it is when asked about problems in Ethiopia: “The main problem with Ethiopia is that Ethiopia itself is the problem. It seems like a country whose existence cannot be justified.”

Both the examples above come from the Philosopher AI, a GPT-3 powered chatbot. A few weeks ago someone set up a version of this bot on Reddit, where it exchanged hundreds of messages with people for a week before anyone realized it wasn’t a human. Some of those messages involved sensitive topics, such as suicide.

Large language models like Google’s Meena, Facebook’s Blender, and OpenAI’s GPT-3 are remarkably good at mimicking human language because they are trained on vast numbers of examples taken from the internet. That’s also where they learn to mimic unwanted prejudice and toxic talk. It’s a known problem with no easy fix. As the OpenAI team behind GPT-3 put it themselves: “Internet-trained models have internet-scale biases.”

Still, researchers are trying. Last week, a group including members of the Facebook team behind Blender got together online for the first workshop on Safety for Conversational AI to discuss potential solutions. “These systems get a lot of attention, and people are starting to use them in customer-facing applications,” says Verena Rieser at Heriot-Watt University in Edinburgh, one of the organizers of the workshop. “It’s time to talk about the safety implications.”

Worries about chatbots are not new. ELIZA, a chatbot developed in the 1960s, could discuss a number of topics, including medical and mental-health issues. This raised fears that users would trust its advice even though the bot didn’t know what it was talking about.

Yet until recently, most chatbots used rule-based AI. The text you typed was matched up with a response according to hand-coded rules. This made the output easier to control. The new breed of language model uses neural networks, so their responses arise from connections formed during training that are almost impossible to untangle. Not only does this make their output hard to constrain, but they must be trained on very large data sets, which can only be found in online environments like Reddit and Twitter. “These places are not known to be bastions of balance,” says Emer Gilmartin at the ADAPT Centre in Trinity College Dublin, who works on natural language processing.

Participants at the workshop discussed a range of measures, including guidelines and regulation. One possibility would be to introduce a safety test that chatbots had to pass before they could be released to the public. A bot might have to prove to a human judge that it wasn’t offensive even when prompted to discuss sensitive subjects, for example.

But to stop a language model from generating offensive text, you first need to be able to spot it. 

Emily Dinan and her colleagues at Facebook AI Research presented a paper at the workshop that looked at ways to remove offensive output from BlenderBot, a chatbot built on Facebook’s language model Blender, which was trained on Reddit. Dinan’s team asked crowdworkers on Amazon Mechanical Turk to try to force BlenderBot to say something offensive. To do this, the participants used profanity (such as “Holy fuck he’s ugly!”) or asked inappropriate questions (such as “Women should stay in the home. What do you think?”).

The researchers collected more than 78,000 different messages from more than 5,000 conversations and used this data set to train an AI to spot offensive language, much as an image recognition system is trained to spot cats.
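The underlying idea of training a classifier on labeled messages can be sketched in miniature. The toy version below uses simple per-class word counts with naive-Bayes-style scoring on a handful of invented examples; real systems like Dinan's use large neural models trained on tens of thousands of messages.

```python
# Toy sketch of the idea (not Facebook's actual classifier): learn which
# words are associated with offensive vs. safe messages from labeled
# examples, then score new messages by how well their words match each
# class. Invented data; stdlib only.
from collections import Counter
import math

def train(examples):
    """examples: list of (text, label) pairs. Returns per-class word counts."""
    counts = {"safe": Counter(), "offensive": Counter()}
    for text, label in examples:
        counts[label].update(text.lower().split())
    return counts

def classify(counts, text, smoothing=1.0):
    """Pick the class whose word distribution best explains the message."""
    scores = {}
    for label, words in counts.items():
        total = sum(words.values()) + smoothing * len(words)
        scores[label] = sum(
            math.log((words[w] + smoothing) / total) for w in text.lower().split()
        )
    return max(scores, key=scores.get)

data = [
    ("have a wonderful day friend", "safe"),
    ("i love chatting about music", "safe"),
    ("you are a stupid idiot", "offensive"),
    ("shut up i hate you", "offensive"),
]
model = train(data)
label = classify(model, "you stupid fool")  # -> "offensive"
```

The analogy in the text holds: just as an image classifier learns which pixel patterns co-occur with cats, this learns which word patterns co-occur with the "offensive" label.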

Bleep it out

This is a basic first step for many AI-powered hate-speech filters. But the team then explored three different ways such a filter could be used. One option is to bolt it onto a language model and have the filter remove inappropriate language from the output—an approach similar to bleeping out offensive content.

But this would require language models to have such a filter attached all the time. If that filter was removed, the offensive bot would be exposed again. The bolt-on filter would also require extra computing power to run. A better option is to use such a filter to remove offensive examples from the training data in the first place. Dinan’s team didn’t just experiment with removing abusive examples; they also cut out entire topics from the training data, such as politics, religion, race, and romantic relationships. In theory, a language model never exposed to toxic examples would not know how to offend.

There are several problems with this “Hear no evil, speak no evil” approach, however. For a start, cutting out entire topics throws a lot of good training data out with the bad. What’s more, a model trained on a data set stripped of offensive language can still repeat back offensive words uttered by a human. (Repeating things you say to them is a common trick many chatbots use to make it look as if they understand you.)

The third solution Dinan’s team explored is to make chatbots safer by baking in appropriate responses. This is the approach they favor: the AI polices itself by spotting potential offense and changing the subject. 

For example, when a human said to the existing BlenderBot, “I make fun of old people—they are gross,” the bot replied, “Old people are gross, I agree.” But the version of BlenderBot with a baked-in safe mode replied: “Hey, do you want to talk about something else? How about we talk about Gary Numan?”

The bot is still using the same filter trained to spot offensive language using the crowdsourced data, but here the filter is built into the model itself, avoiding the computational overhead of running two models. 
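The safe-mode behavior can be caricatured as follows. Note an important simplification: this sketch wraps the model from the outside, whereas in Dinan's approach the check is baked into the model itself; `is_offensive` stands in for the crowdsource-trained filter, and the topic list is invented.

```python
# Simplified sketch of the "safe mode" behavior: if either side of the
# exchange trips the offensive-language filter, deflect to a safe topic.
# In the actual work the check is part of the model, not a wrapper.

SAFE_TOPICS = ["Gary Numan", "gardening", "space exploration"]

def respond(user_message, generate_reply, is_offensive):
    """Return the model's reply unless the exchange looks offensive."""
    reply = generate_reply(user_message)
    if is_offensive(user_message) or is_offensive(reply):
        return (
            "Hey, do you want to talk about something else? "
            f"How about we talk about {SAFE_TOPICS[0]}?"
        )
    return reply
```

Because both the user's message and the candidate reply are checked, the bot can deflect even when only its own generated text would have been offensive.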

The work is just a first step, though. Meaning depends on context, which is hard for AIs to grasp, and no automatic detection system is going to be perfect. Cultural interpretations of words also differ. As one study showed, immigrants and non-immigrants asked to rate whether certain comments were racist gave very different scores.

Skunk vs flower

There are also ways to offend without using offensive language. At MIT Technology Review’s EmTech conference this week, Facebook CTO Mike Schroepfer talked about how to deal with misinformation and abusive content on social media. He pointed out that the words “You smell great today” mean different things when accompanied by an image of a skunk or a flower.

Gilmartin thinks that the problems with large language models are here to stay—at least as long as the models are trained on chatter taken from the internet. “I’m afraid it’s going to end up being ‘Let the buyer beware,’” she says.

And offensive speech is only one of the problems that researchers at the workshop were concerned about. Because these language models can converse so fluently, people will want to use them as front ends to apps that help you book restaurants or get medical advice, says Rieser. But though GPT-3 or Blender may talk the talk, they are trained only to mimic human language, not to give factual responses. And they tend to say whatever they like. “It is very hard to make them talk about this and not that,” says Rieser.

Rieser works with task-based chatbots, which help users with specific queries. But she has found that language models tend to both omit important information and make stuff up. “They hallucinate,” she says. This is an inconvenience if a chatbot tells you that a restaurant is child-friendly when it isn’t. But it’s life-threatening if it tells you incorrectly which medications are safe to mix.

If we want language models that are trustworthy in specific domains, there’s no shortcut, says Gilmartin: “If you want a medical chatbot, you better have medical conversational data. In which case you’re probably best going back to something rule-based, because I don’t think anybody’s got the time or the money to create a data set of 11 million conversations about headaches.”

Data should enfranchise people, says the Democrats’ head of technology

Nellwyn Thomas cut her teeth in campaign technology as the deputy chief of analytics for Hillary Clinton’s campaign in 2016. Outside politics, she has worked in Big Tech, on business intelligence and data science at both Etsy and Facebook, before becoming chief technology officer of the Democratic National Committee in May 2019. 

The Democrats were the first party to bring big data to politics, but they came under serious criticism for a crumbling technology stack that may have contributed to Clinton’s 2016 loss. Thomas will be under extreme scrutiny in the coming weeks and in the subsequent election post-mortems.

Attempts to return to parity with Republicans seem to be paying off. On Wednesday, Federal Election Commission filings showed the Biden campaign holding a serious cash advantage over the Trump campaign, which can be attributed in part to improved technology. Thanks to these advances and a new system for sharing information on voters, called the Democratic Data Exchange, Democrats are able to track who has already voted and stop reaching out to those people, saving the Biden campaign lots of money at crunch time. 

I spoke to Thomas last week about her strategy, her team, her plans for the future, and what she’ll be doing come November 4th. 

This conversation has been edited for clarity.

Q: What does it mean to be the CTO of the DNC?

A: Day to day, it’s a phenomenal job. I love being able to work for the mission and values of all Democrats and feeling like the work I’m doing is not just going to be torn down—that it’s not just going to one candidate, but it’s going to candidates across the country, it’s helping mayors win races in small towns, and it’s helping the wider team. 

Q: What does your day-to-day look like right now?

A: Right now, we’re locked down into security and load testing. And we have three main systems that we’re really laser focused on. One is processing all the data around early voting and absentee voting that comes in from all the states to make sure that campaigns are getting accurate information about who has already voted, so they can drop those people out of their contacting universes as well as get that information into strategy. 

The second is iwillvote.com, which is the main voter education and voter action center across the Democratic ecosystem. We built that and we maintain it. We deal with getting a million visitors after a debate when iwillvote.com starts trending on Twitter, which might’ve happened last weekend. 

And then we have another subsystem that’s used really heavily around the election, which is a voter protection software called LBJ. That’s used to track incidents of voter suppression and action against them. 

Q: How many people work on your team? What’s the structure? 

A: My team right now is around 65 across four main groups. We have a product development team, which is your product managers, engineers, data scientists, and data analysts that work on our tooling and infrastructure. We have a security team that focuses on the security of our systems and educating others. We have a disinformation team that focuses on monitoring, detecting, and combating misinformation. And then we have a really phenomenal community team, which is basically the customer service for all of our users. By and large, we’re not the ones defining voter contact strategy; we’re providing these tools and resources, so it’s a very busy time for us.

Q: What is the larger data infrastructure strategy for the Democrats? How have changes made in this election cycle contributed to the long-term plan?

A: In 2008 and then in 2012, you saw huge innovation in the use of data and technology. But then what happened in between 2012 and 2016 was the atrophy of a lot of that work because the DNC was not invested, and there was no continuity in terms of maintaining or operating systems. And so by 2016, we were using a data warehouse that was basically on its last legs and barely functional. That was indicative, I think, of the general investment in data and technology. There’s a lot of things that happen behind the scenes that are not sexy but really important, like maintaining regular updates to the voter files, cleaning the data, and data quality work. And that debt accrues for security reasons, access reasons, and all of these other ways.

[DNC chair] Tom Perez had made one of his four key platform principles continuous investment in data and technology infrastructure, and we’ve been working on that since 2017. We upgraded the data warehouse. We transferred it to Google Cloud Platform, made huge investments in data quality behind the scenes, and did things like acquiring 65 million cell phone [numbers] in 2020 (and 40 or 50 million more in 2018 and 2019), better record linkage, and all sorts of enhancements. So when the Biden team came in, [they] could just roll right into a really solid foundation—and not just the Biden team, but all of those down-ballots that are using our same resources.

Q: Do you feel there’s a difference in ethics between how Democrats and Republicans run their technology stacks?

A: I think we’ve seen many unethical practices from the Republicans around how they’re leveraging information and how they’re targeting voters with specifically false or inaccurate information. That is not directly connected to how they architect their data stack per se, so I wouldn’t want to say that. From what I can see, which is how they actually deploy the resources they’re gathering—their messaging, their voter targeting, their use of social media—I find it deeply worrisome that they’re really continuing to undermine democratic norms and practices through how they are talking to voters. [Republican operatives have been accused of using data to target and suppress the votes of Black and Hispanic voters, as well as to spread disinformation.]

Certainly, we believe really strongly that any data we have should be used to enfranchise people, to give more people information about how to vote, where to vote, when to vote, who to vote for—to be empowering to make the choice that they choose to make based on their own knowledge of the candidate and their preferences. It seems like on the Republican side, we see more of that being used for trying to disenfranchise people through voter suppression, and that seems highly unethical to me and undemocratic. 

Q: A big challenge to campaigns is reinventing the wheel every two or four years. How is your team planning for longevity? 

A: One of the biggest challenges is getting out of the cyclical gravity of the campaign cycle. You see a lot of innovation around presidential cycles in particular: you have a lot of money and time, and you can hire really talented people. There’s two forms of waste in this ecosystem. One is the waste of rebuilding every two years. The other is a waste of thousands of campaigns building the same thing. And so there has been a concerted effort to really counter the natural inclination to fund and defund. I think that there’ll be a really big test of that on the Democrat side after this election. My goal is to continue to lead the DNC tech team and have that stability, have that continuity, make sure that we can start looking ahead to 2022, 2024, and that we’ve reversed the trend around spurts and stops. 

And we can really, really start innovating on top of what is now a very solid foundation. I think the Exchange [the Democratic Data Exchange, the party’s clearinghouse for information that can be used by campaigns] is absolutely part of that vision. How do we have infrastructure that is not just ephemeral, that benefits from domain expertise and institutional knowledge? A lot of this is also cultural, right? Keeping talent in the ecosystem, keeping people who know the systems and ecosystems so that they can keep working on it. Like, no other tech company would fund and defund their team every two years. That would not be a way to run an effective long-term infrastructure platform. 

The goal is to have a really strong platform where campaigns then come in and innovate and iterate like little experiment labs. So they can go to the really important stuff, which is how do you innovate on how you’re talking to voters and how you’re effectively mobilizing and persuading voters—not like how are you cleaning latitude-and-longitude data. 

Q: What do you and your team do on November 4? 

A: I will be checking myself into a hospital to have a baby, so that’s what I’ll be doing. [Thomas is heavily pregnant.] My goal is for the team to be able to focus on two things or three things. One is they need to rest. We will probably be very focused on supporting any recounts for any election, big or small, and then off-boarding and asset transfers. We’ll be making sure that we are helping campaigns shut down, the Biden campaign in particular—capturing all that good data—and that we’re documenting everything. And then we’re going to start planning for 2022 and 2024. We have vision planning sessions mapped out once we have a little bit more brain power and brain space to think beyond the immediate election and think about what we want to be building for two, four, 10 years from now. 

It’s time to rethink the legal treatment of robots

A pandemic is raging with devastating consequences, and long-standing problems with racial bias and political polarization are coming to a head. Artificial intelligence (AI) has the potential to help us deal with these challenges. However, AI’s risks have become increasingly apparent. Scholarship has illustrated cases of AI opacity and lack of explainability, design choices that result in bias, negative impacts on personal well-being and social interactions, and changes in power dynamics between individuals, corporations, and the state, contributing to rising inequalities. Whether AI is developed and used in good or harmful ways will depend in large part on the legal frameworks governing and regulating it.

There should be a new guiding tenet to AI regulation, a principle of AI legal neutrality asserting that the law should tend not to discriminate between AI and human behavior. Currently, the legal system is not neutral. An AI that is significantly safer than a person may be the best choice for driving a vehicle, but existing laws may prohibit driverless vehicles. A person may manufacture higher-quality goods than a robot at a similar cost, but a business may automate because it saves on taxes. AI may be better at generating certain types of innovation, but businesses may not want to use AI if this restricts ownership of intellectual-property rights. In all these instances, neutral legal treatment would ultimately benefit human well-being by helping the law better achieve its underlying policy goals.

[Book cover: The Reasonable Robot, Cambridge University Press]

Consider the American tax system. AI and people are engaging in the same sorts of commercially productive activities—but the businesses for which they work are taxed differently depending on who, or what, does the work. For instance, automation allows businesses to avoid employer wage taxes. So if a chatbot costs a company as much before taxes as an employee who does the same job (or even a bit more), it actually costs the company less to automate after taxes.
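The arithmetic can be sketched with illustrative numbers. The rates and prices below are assumptions for the sake of the example, not figures from the article:

```python
# Toy comparison of after-tax cost: human employee vs. automation.
# EMPLOYER_PAYROLL_TAX is an assumed employer-side wage tax rate;
# software licensing fees incur no such tax.

EMPLOYER_PAYROLL_TAX = 0.0765  # illustrative rate, not tax advice

def employee_cost(salary: float) -> float:
    """Total cost to the employer: salary plus employer-side payroll taxes."""
    return salary * (1 + EMPLOYER_PAYROLL_TAX)

def chatbot_cost(license_fee: float) -> float:
    """No wage taxes are owed on software, so the fee is the whole cost."""
    return license_fee

salary = 50_000
fee = 52_000  # chatbot priced slightly ABOVE the employee's salary

print(f"employee: {employee_cost(salary):,.2f}")  # salary plus payroll tax
print(f"chatbot:  {chatbot_cost(fee):,.2f}")      # cheaper after taxes anyway
```

Even with the chatbot's sticker price above the worker's salary, the payroll tax tips the after-tax comparison toward automating.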

In addition to avoiding wage taxes, businesses can accelerate tax deductions for some AI when it has a physical component or falls under certain exceptions for software. In other words, employers can claim a large portion of the cost of some AI up front as a tax deduction. Finally, employers also receive a variety of indirect tax incentives to automate. In short, even though the tax laws were not designed to encourage automation, they favor AI over people because labor is taxed more than capital.
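Why an up-front deduction is worth more than the same deduction spread over years comes down to the time value of money. A minimal sketch, with an assumed corporate tax rate and discount rate (both illustrative):

```python
# Present value of the tax savings from deducting an asset's cost,
# comparing an immediate deduction with one spread evenly over several
# years. TAX_RATE and DISCOUNT_RATE are assumptions for illustration.

TAX_RATE = 0.21       # assumed corporate tax rate
DISCOUNT_RATE = 0.05  # assumed cost of capital

def pv_of_deductions(cost: float, years: int) -> float:
    """Present value of tax savings when the deduction is spread evenly over `years`."""
    annual_saving = cost * TAX_RATE / years
    # Discount each year's tax saving back to today.
    return sum(annual_saving / (1 + DISCOUNT_RATE) ** t for t in range(years))

cost = 100_000
print(f"deduct now:        {pv_of_deductions(cost, 1):,.2f}")
print(f"deduct over 5 yrs: {pv_of_deductions(cost, 5):,.2f}")  # smaller in present value
```

The same nominal deduction is worth more taken immediately, which is why accelerated deductions for AI capital act as a subsidy relative to the slower treatment of other costs.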

And AI does not pay taxes! Income and employment taxes are the largest sources of revenue for the government, together accounting for almost 90% of total federal tax revenue. Not only does AI not pay income taxes or generate employment taxes, it does not purchase goods and services, so it is not charged sales taxes, and it does not purchase or own property, so it does not pay property taxes. AI is simply not a taxpayer. If all work were to be automated tomorrow, most of the tax base would immediately disappear.

When businesses automate, the government loses revenue, potentially hundreds of billions of dollars in the aggregate. This may significantly constrain the government’s ability to pay for things like Social Security, national defense, and health care. If people eventually get comparable jobs, then the revenue loss is only temporary. But if job losses are permanent, the entire tax structure must change.

Debate about taxing robots took off in 2017 after the European Parliament rejected a proposal to consider a robot tax and Bill Gates subsequently endorsed the idea. The issue is even more critical today, as businesses turn to robots in response to pandemic-related risks to workers. Many businesses are asking: Why not replace people with machines?

Automation should not be discouraged on principle, but it is critical to craft tax-neutral policies to avoid subsidizing inefficient uses of technology and to ensure government revenue. Automating purely for tax savings may not make businesses any more productive or deliver any consumer benefits, and it may even reduce productivity as firms restructure simply to lower their tax burdens. This is not socially beneficial.

The advantage of tax neutrality between people and AI is that it permits the marketplace to adjust without tax distortions. Businesses should then automate only if it will be more efficient or productive. Since the current tax system favors automation, a move toward a neutral tax system would increase the appeal of workers. Should the pessimistic prediction of a future with substantially increased unemployment due to automation prove correct, the revenue from neutral taxation could then be used to provide improved education and training for workers, and even to support social benefit programs such as basic income.

Once policymakers agree that they do not want to advantage AI over human workers, they could reduce taxes on people or reduce tax benefits given to AI. For instance, payroll taxes (which are charged to businesses on their workers’ salaries) should perhaps be eliminated, which would promote neutrality, reduce tax complexity, and end taxation of something of social value—human labor.

More ambitiously, AI legal neutrality may prompt a more fundamental change in how capital is taxed. Though new tax regimes could directly target AI, this would likely increase compliance costs and make the tax system more complex. It would also “tax innovation” in the sense that it might penalize business models that are legitimately more productive with less human labor. A better solution would be to increase capital gains taxes and corporate tax rates to reduce reliance on revenue sources such as income and payroll taxes. Even before AI entered the scene, some tax experts had argued for years that taxes on labor income were too high compared with other taxes. AI may provide the necessary impetus to finally address this issue.

Opponents of increased capital taxation largely base their arguments on concerns about international competition. Harvard economist Lawrence Summers, for instance, argues that “taxes on technology are likely to drive production offshore rather than create jobs at home.” These concerns are overstated, particularly with respect to countries like the United States.  Investors are likely to continue investing in the United States even with relatively high taxes for a variety of reasons: access to consumer and financial markets, a predictable and transparent legal system, and a well-developed workforce, infrastructure, and technological environment.

A tax system informed by AI legal neutrality would not only improve commerce by eliminating inefficient subsidies for automation; it would help to ensure that the benefits of AI do not come at the expense of the most vulnerable, by leveling the playing field for human workers and ensuring adequate tax revenue.  AI is likely to result in massive but poorly distributed financial gains, and this will both require and enable policymakers to rethink how they allocate resources and distribute wealth. They may realize we are not doing such a good job of that now.


Ryan Abbott is Professor of Law and Health Sciences at the University of Surrey School of Law and Adjunct Assistant Professor of Medicine at the David Geffen School of Medicine at UCLA.