How democracies can claim back power in the digital world

Should Twitter censor lies tweeted by the US president? Should YouTube take down covid-19 misinformation? Should Facebook do more against hate speech? Such questions, which crop up daily in media coverage, can make it seem as if the main technologically driven risk to democracies is the curation of content by social-media companies. Yet these controversies are merely symptoms of a larger threat: the depth of privatized power over the digital world.

Every democratic country in the world faces the same challenge, but none can defuse it alone. We need a global democratic alliance to set norms, rules, and guidelines for technology companies and to agree on protocols for cross-border digital activities including election interference, cyberwar, and online trade. Citizens are better represented when a coalition of their governments—rather than a handful of corporate executives—defines the terms of governance, and when checks, balances, and oversight mechanisms are in place.

There’s a long list of ways in which technology companies govern our lives without much regulation. In areas from building critical infrastructure and defending it—or even producing offensive cyber tools—to designing artificial intelligence systems and government databases, decisions made in the interests of business set norms and standards for billions of people.

Increasingly, companies take over state roles or develop products that affect fundamental rights. For example, facial recognition systems that were never properly regulated before being developed and deployed are now so widely used as to rob people of their privacy. Similarly, companies systematically scoop up private data, often without consent—an industry norm that regulators have been slow to address.

Since technologies evolve faster than laws, discrepancies between private agency and public oversight are growing. Take, for example, “smart city” companies, which promise that local governments will be able to ease congestion by monitoring cars in real time and adjusting the timing of traffic lights. Unlike, say, a road built by a construction company, this digital infrastructure is not necessarily in the public domain. The companies that build it acquire insights and value that may not flow back to the public.

This disparity between the public and private sectors is spiraling out of control. There’s an information gap, a talent gap, and a compute gap. Together, these add up to a power and accountability gap. An entire layer of control of our daily lives thus exists without democratic legitimacy and with little oversight.

Why should we care? Because decisions that companies make about digital systems may not adhere to essential democratic principles such as freedom of choice, fair competition, nondiscrimination, justice, and accountability. Unintended consequences of technological processes, wrong decisions, or business-driven designs could create serious risks for public safety and national security. And power that is not subject to systematic checks and balances is at odds with the founding principles of most democracies.

Today, technology regulation is often characterized as a three-way contest between the state-led systems in China and Russia, the market-driven one in the United States, and a values-based vision in Europe. The reality, however, is that there are only two dominant systems of technology governance: the privatized one described above, which applies in the entire democratic world, and an authoritarian one.

To bring globe-spanning technology firms to heel, we need something new: a global alliance that puts democracy first.

The laissez-faire approach of democratic governments, and their reluctance to rein in private companies at home, also plays out on the international stage. While democratic governments have largely allowed companies to govern, authoritarian governments have taken to shaping norms through international fora. This unfortunate shift coincides with a trend of democratic decline worldwide, as large democracies like India, Turkey, and Brazil have become more authoritarian. Without deliberate and immediate efforts by democratic governments to win back agency, corporate and authoritarian governance models will erode democracy everywhere.

Does that mean democratic governments should build their own social-media platforms, data centers, and mobile phones instead? No. But they do need to urgently reclaim their role in creating rules and restrictions that uphold democracy’s core principles in the technology sphere. Up to now, these governments have slowly begun to do that with laws at the national level or, in Europe’s case, at the regional level. But to bring globe-spanning technology firms to heel, we need something new: a global alliance that puts democracy first.

Teaming up

Global institutions born in the aftermath of World War II, like the United Nations, the World Trade Organization, and the North Atlantic Treaty Organization, created a rules-based international order. But they fail to take the digital world fully into account in their mandates and agendas, even if many are finally starting to focus on digital cooperation, e-commerce, and cybersecurity. And while digital trade (which requires its own regulations, such as rules for e-commerce and criteria for the exchange of data) is of growing importance, WTO members have not agreed on global rules covering services for smart manufacturing, digital supply chains, and other digitally enabled transactions.

What we need now, therefore, is a large democratic coalition that can offer a meaningful alternative to the two existing models of technology governance, the privatized and the authoritarian. It should be a global coalition, welcoming countries that meet democratic criteria.

The Community of Democracies, a coalition of states that was created in 2000 to advance democracy but never had much impact, could be revamped and upgraded to include an ambitious mandate for the governance of technology. Alternatively, a “D7” or “D20” could be established—a coalition akin to the G7 or G20 but composed of the largest democracies in the world.

Such a group would agree on regulations and standards for technology in line with core democratic principles. Then each member country would implement them in its own way, much as EU member states do today with EU directives.

What problems would such a coalition resolve? The coalition might, for instance, adopt a shared definition of freedom of expression for social-media companies to follow. Perhaps that definition would be similar to the broadly shared European approach, where expression is free but there are clear exceptions for hate speech and incitements to violence.

Or the coalition might limit the practice of microtargeting political ads on social media: it could, for example, forbid companies from allowing advertisers to tailor and target ads on the basis of someone’s religion, ethnicity, sexual orientation, or collected personal data. At the very least, the coalition could advocate for more transparency about microtargeting to create more informed debate about which data collection practices ought to be off limits.

The democratic coalition could also adopt standards and methods of oversight for the digital operations of elections and campaigns. This might mean agreeing on security requirements for voting machines, plus anonymity standards, stress tests, and verification methods such as requiring a paper backup for every vote. And the entire coalition could agree to impose sanctions on any country or non-state actor that interferes with an election or referendum in any of the member states.

Another task the coalition might take on is developing trade rules for the digital economy. For example, members could agree never to demand that companies hand over the source code of software to state authorities, as China does. They could also agree to adopt common data protection rules for cross-border transactions. Such moves would allow a sort of digital free-trade zone to develop across like-minded nations.

China already has something similar to this in the form of eWTP, a trade platform that allows global tariff-free trade for transactions under a million dollars. But eWTP, which was started by e-commerce giant Alibaba, is run by private-sector companies based in China. The Chinese government is known to have access to data through private companies. Without a public, rules-based alternative, eWTP could become the de facto global platform for digital trade, with no democratic mandate or oversight.

Another matter this coalition could address would be the security of supply chains for devices like phones and laptops. Many countries have banned smartphones and telecom equipment from Huawei because of fears that the company’s technology may have built-in vulnerabilities or backdoors that the Chinese government could exploit. Proactively developing joint standards to protect the integrity of supply chains and products would create a level playing field between the coalition’s members and build trust in companies that agree to abide by them.

The next area that may be worthy of the coalition’s attention is cyberwar and hybrid conflict (where digital and physical aggression are combined). Over the past decade, a growing number of countries have identified hybrid conflict as a national security threat. Any nation with highly skilled cyber operations can wreak havoc on countries that fail to invest in defenses against them. Meanwhile, cyberattacks by non-state actors have shifted the balance of power between states.

Right now, though, there are no international criteria that define when a cyberattack counts as an act of war. This encourages bad actors to strike with many small blows. In addition to their immediate economic or (geo)political effect, such attacks erode trust that justice will be served.

A democratic coalition could work on closing this accountability gap and initiate an independent tribunal to investigate such attacks, perhaps similar to the Permanent Court of Arbitration in The Hague, which rules on international disputes. Leaders of the democratic alliance could then decide, on the basis of the tribunal’s rulings, whether economic and political sanctions should follow.

These are just some of the ways in which a global democratic coalition could advance rules that are sorely lacking in the digital sphere. Coalition standards could effectively become global ones if the coalition’s members represent a good portion of the world’s population. The EU’s General Data Protection Regulation provides an example of how this could work. Although GDPR applies only to Europe, global technology firms must follow its rules for their European users, and this makes it harder to object as other jurisdictions adopt similar laws. Similarly, non-members of the democratic coalition could end up following many of its rules in order to enjoy the benefits.

If democratic governments do not assume more power in technology governance as authoritarian governments grow more powerful, the digital world—which is a part of our everyday lives—will not be democratic. Without a system of clear legitimacy for those who govern—without checks, balances, and mechanisms for independent oversight—it’s impossible to hold technology companies accountable. Only by building a global coalition for technology governance can democratic governments once again put democracy first.

Marietje Schaake is the international policy director at Stanford University’s Cyber Policy Center and an international policy fellow at Stanford’s Institute for Human-Centered Artificial Intelligence. Between 2009 and 2019, Marietje served as a Member of European Parliament for the Dutch liberal democratic party.

Deepfake Putin is here to warn Americans about their self-inflicted doom

The news: Two political ads will be broadcast on social media today, featuring deepfake versions of Russian president Vladimir Putin and North Korean leader Kim Jong-un. Both deepfake leaders will deliver the same message: that America doesn’t need any election interference from them; it will ruin its democracy by itself.

What are they for? Yes, the ads sound creepy, but they’re meant for a good cause. They’re part of a campaign from the nonpartisan advocacy group RepresentUs to protect voting rights during the upcoming US presidential election, amid President Trump’s attacks on mail-in voting and suggestions that he may refuse a peaceful transition. The goal is to shock Americans into understanding the fragility of democracy and to prompt them to take action, such as checking their voter registration and volunteering at the polls. It flips the script on the typical narrative of political deepfakes, which experts often worry could be abused to confuse voters and disrupt elections.

How they were made: RepresentUs worked with the creative agency Mischief at No Fixed Address, which came up with the idea of using dictators to deliver the message. They filmed two actors with the right face shape and authentic accents to recite the script. They then worked with a deepfake artist who used an open-source algorithm to swap in Putin’s and Kim’s faces. A post-production crew cleaned up the leftover artifacts of the algorithm to make the video look more realistic. All in all, the process took only 10 days. Attempting the equivalent with CGI likely would have taken months, the team says. It also could have been prohibitively expensive.

Are we ready? The ads were supposed to air on Fox, CNN, and MSNBC in their Washington, DC, markets, but the stations pulled them from airing at the last minute. A spokesperson for the campaign said they were still waiting on an explanation. The ads include a disclaimer at the end, stating: “The footage is not real, but the threat is.” But given the sensitive nature of using deepfakes in a political context, it’s possible the networks felt the American public just wasn’t ready.

Why security experts are braced for the next election hack-and-leak

When the New York Times published its blockbuster scoop about President Donald Trump’s tax returns, a lot of cybersecurity experts had traumatic flashbacks to four years ago.

Just a few weeks before the 2016 election, recordings were leaked of Trump on the set of Access Hollywood describing his strategy to sexually assault women. The news threatened to derail his presidential bid.

Less than an hour later, WikiLeaks began publishing emails hacked by Russian intelligence from the account of Hillary Clinton’s presidential campaign chair, John Podesta.

The goal was to distract from the Access Hollywood tapes, and the tactic worked. 

Despite containing relatively little news across their tens of thousands of pages, the hacked-and-leaked emails eclipsed the tapes—in part because the media, technology companies, and government agencies were not prepared for such a well-planned Russian influence operation. The sheer volume of documents was enough to overwhelm the news cycle. It proved just how vulnerable journalists and Silicon Valley were to this new twist on the old art of information warfare.

Since 2016, hack-and-leak operations have become far more common. Incidents have been spotted repeatedly in Saudi Arabia, the United Kingdom, France, and the United Arab Emirates. The outcomes have varied wildly, but the overall trend is clear: this has become a go-to tool for foreign nations looking to impact politics and elections.

“We’ve seen an uptick in these kinds of operations, first, because they’re easy to do,” says James Shires, a researcher at the Atlantic Council’s Cyber Statecraft Initiative. “It’s also deniable because of an unknown person or hacktivist who claims to be doing the leaking. And it’s within the rules of the game. It’s not clear what is permissible and not in terms of foreign interference in elections. It’s very clear that changing the vote count is beyond the red line most states set. But leaking information about political parties, it’s hard to measure the impact and it’s not clearly something states say don’t do and this is how we’ll respond. So there is a great opportunity, it’s deniable, and it’s subtle as well.”

The next operation

So is the United States any better prepared for this kind of information warfare during the 2020 election? 

The Russian hackers who carried out the 2016 operation were spotted targeting Democratic organizations just this month. When Facebook removed a Russia-linked influence operation last week, the head of security policy at the company explicitly warned about hack-and-leak operations. And last week Washington Post editor Marty Baron warned his staff about the perils of covering hacked material and laid out the new plan: Slow down and think more about the bigger picture. With the presidential election just 36 days away, the possibility of another distracting dump of hacked information looms large.

“The effect of a hacking operation really comes from the underlying political context—and in that case, the US is far worse now than it was in 2016.”

Shires, who researches hack-and-leaks, says that America has a mixed record. On one hand, the US government, political campaigns, press, and tech companies are more aware of the threat than in 2016. There have also been real investments and increases in cybersecurity protection. On the other hand, he points out that France responded in a very different way to similar attempts to interfere with its own election.

“The effect of a hacking operation really comes from the underlying political context and in that case the US is far worse now than it was in 2016,” Shires says. “If you look at the Macron leaks, which happened shortly before the French president was elected, a lot of things from the party were put online. French media got together, the candidate communicated, and they agreed not to publish stories based on these leaks before the election. There is a lot of trust and community spirit in the French media and political environment. That is clearly not the case in the US at the moment.”

Facing the same trap

Shires says a lot can be done to blunt the next operation. Traditional media can more thoughtfully control the tone and focus of their articles so that the hackers don’t so easily manipulate narratives. Social-media companies can, in some cases, control the virality of the hacked material being spread.

The situation quickly becomes more complex if the material is coming out of American newsrooms. That makes journalists key targets in these kinds of operations.

“The press is, to a degree, aware of how they were used and played in 2016,” says Bret Schafer, a media and digital disinformation researcher at the Alliance for Securing Democracy. “But collectively I don’t think we’re in a much better spot for a hack-and-leak operation. Facebook and Twitter policies now ban stolen material from being published on their platform, but that only bans it from its point of origin. If it’s placed somewhere else, a fringe site or a publication, then it can exist. And for obvious reasons we’re not going to look to Facebook to take down the New York Times if they report on hack-and-leak material.” 

“The tech companies are boxed in and reporters look at it asking if the information is authentic and of public interest. I’m hoping they don’t fall in the same trap of 2016 of pulling out more salacious details not of the public interest. But this is still the vector where we are most vulnerable.”

And how should ordinary voters prepare?

Be careful, says Shires. When presented with leaked information “it’s natural and valuable to read and learn.” 

“But the second level of how to treat this information is to think twice about why it’s in the public domain, who tried to put it there, who leaked it and for what purpose. This is media literacy, to understand the sourcing and the actors writing these stories and producing information behind these stories. If every member of the public is thinking twice about the content and sourcing, then we should get to a much more mature and responsible debate.”

The technology that powers the 2020 campaigns, explained

Campaigns and elections have always been about data—underneath the empathetic promises to fix your problems and fight for your family, it’s a business of metrics. If a campaign is lucky, it will find its way through a wilderness of polling, voter attributes, demographics, turnout, impressions, gerrymandering, and ad buys to connect with voters in a way that moves or even inspires them. Obama, MAGA, AOC—all have had some of that special sauce. Still, campaigns that collect and use the numbers best win.

That’s been true for some time, of course. In 2017, Hillary Clinton lamented that the Democratic National Committee had supplied her team with out-of-date data. She blamed this in part for her loss to Donald Trump, whose campaign sat atop an impressive Republican data-crunching machine. (The DNC retorted that it wasn’t the data, but how it was used, that was inadequate.)

In 2020, campaigns have added new wrinkles to their tactics for gathering and manipulating data. Traditional polling is giving way to AI-powered predictive modeling; massive data exchanges, once considered questionably legal, allow campaigns, PACs, and other groups to coordinate their efforts. And who can forget microtargeting? Both campaigns seek to arm themselves with comprehensive views of each potential voter and are using algorithms to segment and target voters more specifically and strategically. Here is our guide to what’s new and improved, and what it means for you, the voter.

Voter data galore

Over the last few years, campaigns have been steadily adding to the vast amount of personal information they keep on voters. That’s partly a result of a practice called acquisition advertising, in which campaigns run direct response ads that seek to get either contact information or opinions straight from a person. As of May, both presidential campaigns were spending upwards of 80% of their ad budgets on direct response ads.

Campaign officials don’t like to talk about exactly how much data they keep—but most voter files probably have somewhere between 500 and 2,500 data points per person. (A voter file is an integrated data set that consolidates state-level voter registration info.) Each ad, phone call, email, and click increases that number. Since the Democratic Data Exchange (or DDx) came online in June, it has aggregated over a billion data points, most of which, DDx says, are contact information.

Contrary to what one might think, though, many of these personal details come from people who’ve already made up their minds about the candidates. The Trump campaign’s app, for example, allows automatic Bluetooth pairing that can help identify a user’s location—something that has drawn scrutiny. (Bluetooth beacons have been found in Trump yard signs in the past.) This kind of surveillance isn’t considered the norm, but it makes sense. People who download a candidate’s app probably already support that candidate, and committed voters are the most likely to donate.

Data exchanges

Data exchanges allow campaigns and PACs to share data, making outreach and messaging more efficient and comprehensive. Republicans have used Data Trust since 2013—it’s a one-stop shop that includes an exchange, voter data, and data hosting services. Democrats initially felt this was a violation of Federal Election Commission rules against cooperation between different types of political organizations, such as PACs, nonprofits, and the campaigns themselves. The American Democracy Legal Fund, a Democratic group, sued Data Trust and lost … so naturally Democrats spun up their own version. That’s the Democratic Data Exchange that went live in June.

The promise of data exchanges is to let all aligned organizations share data. According to a demo given to the New York Times, DDx can produce a dashboard that shows how comfortable each voter is with voting by mail, and this is shared among all liberal groups in the exchange. In previous years, local canvassing groups, state parties, and issue-oriented PACs might all have been spending money in parallel collecting that kind of information. On the Republican side, Data Trust has proved its worth many times over. For example, it gathered information on voters who cast their ballots early during the 2018 midterm elections. Campaigns stopped reaching out to those people, saving a reported $100 million.

Next-level microtargeting

In ancient Rome, slaves were trained to memorize the names of voters who might be persuaded to vote for their master, so that he could find and greet them personally. These days, the strategy behind personal targeting comes from computer models that can slice the electorate into highly specific groups. Messaging is honed using extensive A/B testing.

Social platforms vary in the kind of microtargeting they allow. Facebook lets campaigns target small groups and individuals. Through its “custom audience” feature, campaigns can upload a spreadsheet of users’ profiles and deploy their message with surgical precision. They can also leverage a tool called “lookalike audiences” that uses those custom lists to find profiles likely to respond in similar ways. Both presidential campaigns have been doing this, and a project out of New York University is tracking these types of advertisements. It shows, for example, that from July 30 to August 4, an ad splashed with the message “Our Recovery Will Be Made in America” appeared in the feeds of about 2,500 Facebook users in Wisconsin. Those users were selected specifically by profile name from a list uploaded by the Biden campaign. It’s nearly impossible to trace where this small list of names came from, though it was most likely purchased from a third party.
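The idea behind lookalike matching can be illustrated with a toy sketch: represent each profile as a numeric feature vector and rank the wider pool of users by similarity to the uploaded seed list. The feature names and profiles below are hypothetical; the platforms’ actual models are proprietary and far more elaborate.

```python
import math

def cosine(u, v):
    """Cosine similarity between two feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

def lookalikes(seed_vectors, pool, k=2):
    """Return the k pool members most similar, on average, to the seed list."""
    def avg_sim(name):
        return sum(cosine(pool[name], s) for s in seed_vectors) / len(seed_vectors)
    return sorted(pool, key=avg_sim, reverse=True)[:k]

# Hypothetical features: [age/100, lives in rural area, donated to cause X]
seed = [[0.45, 1, 1], [0.50, 1, 1]]                   # the uploaded "custom audience"
pool = {"ana": [0.48, 1, 1], "bo": [0.30, 0, 0], "cy": [0.60, 1, 0]}
print(lookalikes(seed, pool))                         # most seed-like profiles first
```

In this sketch the profile closest to the seed list ranks first; a real system would learn which features matter from engagement data rather than weighting them all equally.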

Other platforms are more restrictive. Google banned political microtargeting early this year, while Twitter has banned political ads from campaigns—though it allows ads from politically aligned advocacy groups.

Out with the polls, in with the AI models

You’ve probably heard: polls don’t work the way they used to. The 2016 presidential election touched off an industry crisis centered on the rise of “non-response bias”—a fancy way of saying that cell-phone users tend not to answer calls from numbers they don’t recognize (like pollsters’), and that people have grown increasingly coy when asked about their political views.

In response, campaigns are turning to machine learning and AI to predict how voters will behave. Instead of relying on intermittent benchmarking of the populace, models are now run using continuously updated data sets. The most common technique campaigns use is called scoring, in which each voter is assigned a number from 1 to 100 based on how likely they are to do something or hold a certain opinion. Campaigns use those likelihoods to inform their strategy, either by attempting to persuade undecided voters or by leveraging strongly held opinions for money or mobilization.
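A minimal sketch of how such a score might be computed, assuming a simple logistic model with made-up features and weights (real campaign models are far larger, trained on the voter files described above, and proprietary):

```python
import math

def turnout_score(features, weights, bias=0.0):
    """Map a voter's feature vector to a 1-100 likelihood score
    via a logistic model (hypothetical weights, for illustration)."""
    z = bias + sum(w * x for w, x in zip(weights, features))
    p = 1.0 / (1.0 + math.exp(-z))           # probability in (0, 1)
    return max(1, min(100, round(p * 100)))  # clamp to the 1-100 scale

# Hypothetical features: [voted in last midterm, age/100, donated before]
weights = [2.0, 1.5, 1.0]
print(turnout_score([1, 0.45, 0], weights))  # habitual voter: high score
print(turnout_score([0, 0.22, 0], weights))  # young non-voter: lower score
```

A campaign would then sort its voter file by scores like these—mobilizing the high end, persuading the middle, and ignoring the rest.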

The models aren’t perfect. In 2016, they predicted Clinton’s win with a margin of error similar to that assumed in the polls. But models have an easier time overcoming some of the problems with polling, and the more data the models ingest, the more accurate they are.

The result: No shared truths

As collective messaging fades in importance, it becomes harder to police the myriad tailored messages political groups are churning out and putting in front of voters. Personalized messaging means that each person’s view of a campaign differs, because each is taking in a different information stream. Embellishment, distortion, and outright lying become that much easier, especially for public figures, whose posts on social platforms often get special treatment. The technologies being fervently employed right now are enabling a reality in which campaigns can manufacture cleavages in the public, fundamentally altering how we form opinions and, ultimately, vote.

All is not lost. Though the 2020 election cycle is in its final stretches, public pressure to redirect these technologies is increasing. In a newly published study, the Pew Research Center showed that 54% of the American public doesn’t think social-media platforms should allow any political advertisements, while 77% of Americans believe data collected on social platforms shouldn’t be used for political targeting.

There are several bills in Congress that reflect this sentiment, like the bipartisan Designing Accounting Safeguards to Help Broaden Oversight and Regulations on Data Act and the Banning Microtargeted Political Ads Act. These bills are due to be addressed in 2021, and experts think some form of regulation is likely, regardless of who wins the White House.

There might be even more underground reservoirs of liquid water on Mars

Four underground reservoirs of water may be sitting below the south pole of Mars. The new findings, published today in Nature Astronomy, suggest Mars is home to even more deposits of liquid water than once thought.

The background: In 2018, a group of Italian researchers used radar observations made by the European Space Agency’s Mars Express orbiter to detect a lake of liquid water sitting 1.5 kilometers below the surface of Mars. The lake, which was about 20 kilometers long, was found near the south pole, at the base of an area of thick glacial ice called the South Polar Layered Deposits. Those radar observations were made by an instrument called Mars Advanced Radar for Subsurface and Ionosphere Sounding (MARSIS).

The new study: Two years later, after a new analysis of the complete MARSIS data set (comprising over 134 radar collection campaigns), members of that same team have confirmed the presence of that body of water. But they have also found evidence of three others, each less than 50 kilometers away from the location of the first. The new analysis applies lessons learned in discriminating between wet and dry subglacial conditions in radar data for Antarctica and Greenland.

The newly discovered patches of water don’t seem to be much different from the one found in 2018. They range from an estimated 10 to 30 kilometers in length. They all start at a depth of about 1.5 kilometers underground, although it’s still unknown how deep any of them actually run.

The water: Don’t expect to be able to drink this water. The only reason it’s been able to stay liquid despite frigid temperatures on Mars is that it’s likely very briny (or salty). Salts can significantly lower the freezing point of water. Calcium, magnesium, sodium, and other salt deposits are found globally on Mars, and previous experiments suggest that brines can easily form in subpolar regions there. It’s plausible they’ve allowed these lakes to remain stable over potentially billions of years. 

So what? Access to water is going to be a big deal for future Martian colonists. But even if this water could be desalinated, accessing it would require intense drilling. There’s plenty more surface ice at the Martian poles that’s much easier to harvest. 

Instead, the most exciting thing about these underground lakes is that they could be home to extraterrestrial life. It’s possible that, just like on Earth, some microbial life has evolved to withstand the extreme conditions of these salty subglacial lakes and made a home for itself.

The best way to investigate this further is to directly study the waters. Elena Pettinelli, a physicist at Roma Tre University in Rome and a coauthor of the new study, says a lander or rover platform would be best suited to this task. The biggest problem, of course, is getting to those depths. One way around the issue could be to measure seismic activity, which could reveal the full depth and geometry of the water bodies and indicate which parts are most likely to be habitable. But seismic observations would still fall far short of telling us anything definitive about whether life exists on Mars.

SpaceX’s Starlink satellites could make US Army navigation hard to jam

SpaceX has already launched more than 700 Starlink satellites, with thousands more due to come online in the years ahead. Their prime mission is to provide high-speed internet virtually worldwide, extending it to many remote locations that have lacked reliable service to date.

Now, research funded by the US Army has concluded that the growing mega-constellation could have a secondary purpose: doubling as a low-cost, highly accurate, and almost unjammable alternative to GPS. The new method would use existing Starlink satellites in low Earth orbit (LEO) to provide near-global navigation services. 

In a non-peer-reviewed paper, Todd Humphreys and Peter Iannucci of the Radionavigation Laboratory at the University of Texas at Austin claim to have devised a system that uses the same satellites, piggybacking on traditional GPS signals, to deliver location precision up to 10 times as good as GPS, in a system much less prone to interference. 

Weak signals

The Global Positioning System consists of a constellation of around 30 satellites orbiting 20,000 kilometers above Earth. Each satellite continuously broadcasts a radio signal containing its position and the exact time from a very precise atomic clock on board. Receivers on the ground can then compare how long signals from multiple satellites take to arrive and calculate their position, typically to within a few meters. 
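The position calculation described above can be sketched in a few lines of code. This is a minimal illustration of the multilateration idea, not how any real receiver is implemented: it assumes the receiver's clock is perfectly synchronized with the satellites (real GPS receivers also solve for their clock bias as a fourth unknown) and uses made-up satellite positions.

```python
# Minimal multilateration sketch: recover a receiver's position from
# signal travel times to satellites at known positions, via Gauss-Newton.
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def locate(sat_positions, travel_times, guess, iters=20):
    """Iteratively find the position whose distances to the satellites
    best match the ranges implied by the measured signal travel times."""
    x = np.asarray(guess, dtype=float)
    ranges = np.asarray(travel_times) * C      # travel time -> distance
    for _ in range(iters):
        diffs = x - sat_positions              # satellite -> receiver vectors
        dists = np.linalg.norm(diffs, axis=1)  # predicted ranges
        residuals = dists - ranges
        J = diffs / dists[:, None]             # Jacobian of range w.r.t. position
        x -= np.linalg.lstsq(J, residuals, rcond=None)[0]
    return x

# Four satellites at illustrative positions (meters), receiver near origin.
sats = np.array([[20e6, 0, 0], [0, 20e6, 0], [0, 0, 20e6], [12e6, 12e6, 12e6]])
true_pos = np.array([1_000.0, 2_000.0, 3_000.0])
times = np.linalg.norm(sats - true_pos, axis=1) / C  # simulated measurements
print(locate(sats, times, guess=[0, 0, 0]))  # ≈ [1000, 2000, 3000]
```

With four or more satellites in view, the same least-squares step extends naturally to solving for the clock bias as well, which is why real receivers need at least four satellites.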

The problem with GPS is that those signals are extremely weak by the time they reach Earth, and are easily overwhelmed by either accidental interference or electronic warfare. In China, mysterious GPS attacks have successfully “spoofed” ships in fake locations, while GPS signals are regularly jammed in the eastern Mediterranean.

The US military relies heavily on GPS. Last year, the US Army Futures Command, a new unit dedicated to modernizing its forces, visited Humphreys’s lab to talk about a startup called Coherent Navigation he had cofounded in 2008. Coherent, which aimed to use signals from Iridium satellites as a rough alternative to GPS, was acquired by Apple in 2015. 

“They told me the Army has a relationship with SpaceX [it signed an agreement to test Starlink to move data across military networks in May] and would I be interested in talking to SpaceX about using their Starlink satellites the same way that I used these old Iridium satellites?” Humphreys says. “That got us an audience with people at SpaceX, who liked it, and the Army gave us a year to look into the problem.” Futures Command also provided several million dollars in funding.

The concept of using LEO satellites for navigation isn’t new. In fact, some of the first US spacecraft launched in the 1960s were Transit satellites orbiting at 1,100 kilometers, providing location information for Navy ships and submarines. The advantage of an LEO constellation is that the signals can be a thousand times stronger than GPS. The disadvantage is that each satellite can serve only a small area beneath it, so that reliable global coverage requires hundreds or even thousands of satellites. 

Upgrade and enhance

Building a whole new network of LEO satellites with ultra-accurate clocks would be an expensive undertaking. Bay Area startup Xona Space Systems plans to do just that, aiming to launch a constellation of at least 300 Pulsar satellites over the next six years.

Humphreys and Iannucci’s idea is different: they would use a simple software upgrade to modify Starlink’s satellites so that their communications abilities and existing GPS signals could together provide position and navigation services.

They claim their new system can even, counterintuitively, deliver better accuracy for most users than the GPS technology it relies upon. That is because the GPS receiver on each Starlink satellite uses algorithms rarely found in consumer products to pinpoint its location within just a few centimeters. These technologies exploit physical properties of the GPS radio signal, and its encoding, to improve the accuracy of location calculations. Essentially, the Starlink satellites can do the heavy computational lifting for their users below.

The Starlink satellites are also essentially internet routers in space, capable of achieving 100 megabits per second. GPS satellites, on the other hand, communicate at fewer than 100 bits per second.

“There are so few bits per second available for GPS transmissions that they can’t afford to include fresh, highly accurate data about where the satellites actually are,” says Iannucci. “If you have a million times more opportunity to send information down from your satellite, the data can be much closer to the truth.”

The new system, which Humphreys calls fused LEO navigation, will use instant orbit and clock calculations to locate users to within 70 centimeters, he estimates. Most GPS systems in smartphones, watches, and cars, for comparison, are only accurate to a few meters. 

But the key advantage for the Pentagon is that fused LEO navigation should be significantly more difficult to jam or spoof. Not only are its signals much stronger at ground level, but the antennas for its microwave frequencies are about 10 times more directional than GPS antennas. That means it should be easier to pick up the true satellite signals rather than those from a jammer. “At least that’s the hope,” says Humphreys.

According to Humphreys and Iannucci’s calculations, their fused LEO navigation system could provide continuous navigation service to 99.8% of the world’s population, using less than 1% of Starlink’s downlink capacity and less than 0.5% of its energy capacity.

“I do think this could lead to a more robust and accurate solution than GPS alone,” says Todd Walter of Stanford University’s GPS Lab, who was not involved with the research. “And if you don’t have to modify Starlink’s satellites, it certainly is a fast, simple way to go.”

Nor is the navigation technology limited to just SpaceX’s satellites. The bankrupt OneWeb constellation that the UK government is purchasing could also serve as a home-grown navigation system, says Iannucci, “although Starlink is in pole position right now.”

Fused LEO navigation does have its drawbacks, however. The initial Starlink mega-constellation is not expected to operate above 60 degrees latitude, meaning that residents of Helsinki might miss out on its benefits, as would soldiers in any future disputed Arctic or Antarctic regions.  

Using the system on the ground would also mean relying upon SpaceX’s own Starlink antenna—described by CEO Elon Musk as looking like a UFO on a stick, and probably very expensive—rather than cheap GPS chips that can fit into smartphones and watches. Any future fused LEO navigation service would, unlike GPS, come with a significant price tag as well, not least because SpaceX needs to start seeing a return on its huge investment in Starlink. For these reasons, not everyone thinks it’s the way forward.

“We looked at this approach a long time ago, and neither the commercial nor the technical capabilities really made sense, which is why we’re working on an independent constellation,” says Xona CEO Brian Manning.

Neither the US Army Futures Command nor SpaceX responded to requests for comment, but the UT researchers are hoping that Musk will see the value of the new technology. “There’s a potential here to really change navigation worldwide,” says Iannucci. 

Correction: We amended the headline.

How to plan your life during a pandemic

The covid-19 pandemic shocked the world and generated high levels of economic, political, and social uncertainty. And for many people, the virus compounded the growing sense of uncertainty they already felt in their lives as a result of automation, geopolitical tensions, and widening inequalities.

With the many sudden changes that covid-19 has brought, planning for the future can feel impossible. Even short-term decisions—What will we do this weekend? Should I send my kids to school?—now require us to process a broad set of data and considerations. Trying to envision life months or a couple of years down the road may seem futile or even foolish.

When faced with high degrees of uncertainty, we tend to worry about all that might happen, and often do so in an unstructured manner. This kind of worry can spur knee-jerk reactions and inhibit sound decision-making, which is especially problematic in the middle of a global crisis when so much is at stake.

Strategic foresight offers an alternative to unproductive worry. It’s a way of thinking that uses alternative futures to guide the decisions we make today. This tool can help us better anticipate possible circumstances and—importantly—adapt when those circumstances threaten our ability to achieve our goals.

Strategic foresight can be a powerful tool to help you understand and evaluate your options even when the future seems very unclear. I use this practice every day in my work, and I believe it can also help people navigate their personal and professional lives during the pandemic.

The good news is, we often practice foresight without even realizing it. You’re doing it, for example, every time you leave the house and decide whether or not to grab an umbrella. But we can make a more explicit effort to think ahead in times of greater uncertainty, or when we’re feeling particularly anxious about what’s to come.

Here’s how to start applying this practice in your own life:

Clarify your goals. Defining a vision is a crucial first step, and an especially productive one for those of us who suddenly find our work or mission in peril. A vision can be a preferred future, a desired outcome, or just an idea of what you need to sustain yourself through a difficult time.

For example, in the face of economic instability brought on by covid-19, your vision may be financial sustainability—or even just survival—over the coming months and years. This might translate to a goal of earning enough income to support yourself and your loved ones.

Consider what futures you might face. Develop scenarios to explore the future world in which your decisions will play out. Scenarios are plausible futures that are strategically relevant and structurally different. They include elements from the past that carry forward, such as existing trends and established commitments, along with new components, such as business models, technologies, or value systems that may soon emerge.

To continue our example, you could create scenarios that consider different shapes for the eventual economic recovery—taking into account what jobs might disappear, change, or bloom, as well as factors like whether and how much government support might be available, if you were to need it.

Identify the implications. Once you have your scenarios, answer these questions: What threats would you face in each? What challenges or opportunities would emerge? Which of your strengths and weaknesses do these scenarios highlight? What new questions do they raise for you? Be systematic, answering each question for each scenario.

In our example, your implications may relate to the value of your assets and the economic opportunities that would be available to you in different scenarios.

Make your assumptions explicit and examine their validity. Our planning assumptions are often implicit, which makes it hard to examine or challenge them. Make them explicit by writing them down, and then sort your assumptions into three categories: those that are credible and should guide your planning; those that should be researched further; and those that are unlikely to become reality.

In our example, counting on being able to return to your pre-covid life might be a dangerous assumption. Your job might change, or not come back at all, even once covid-19 is under control. Automation might have made your job redundant, or digital alternatives to the product or service your company produced might have become the new normal.

Review your options, plans, and decisions. Start to devise your plan of action. What will you do when you arrive in an alternative future? What could you do now that would make you more resilient to possible challenges? What skills or capabilities can you start building? What small investments can you make today, to avoid having to invent solutions when you find yourself in a world very different from the current one?

Strategic foresight helps us look beyond the current situation to what might follow, and figure out how we can prepare for that. For example, you might consider training for a skill that will be valuable in the future—and ideally, choose one that would hold its value in multiple future scenarios.

Monitor and adapt. Establish a system to monitor early warning signs that indicate which of your possible futures is actually emerging. This allows you to adapt your course of action as early as possible, or to pursue the best options for yourself.

For example, interest rates, employment rates in your industry, consumer and corporate confidence scores, and the availability of covid-19 treatments or vaccines could all be potential early warning signs for which of your possible futures is most likely to play out.

As you put these techniques into practice, it’s best to try to adopt a strategic foresight mindset:

Accept uncertainty as the norm. Forecasting has value, but prudent people and organizations do not bet everything on having things turn out as expected. Instead, they prepare for a wide range of plausible scenarios to avoid finding themselves in situations that they are unprepared to manage, or where they need to invent solutions and implement them at the same time.

Be humble about your ability to manage in the moment. Thoughts like “We’ll handle it when it happens,” or “This is only temporary, and things will get back to normal soon,” are common examples of wishful thinking.

Take off the blinders. The covid-19 pandemic has shown how quickly the world can change. Such massive disruptions are not as rare as we would like to think. Disruptive change can seem as though it appears suddenly and without warning, but the threat was probably there all along. We may have downplayed its potential magnitude or discounted its odds. Cultivating a broader perspective about what might happen in the future will prompt you to revisit your own deeply held assumptions.

Be brave. Keep all relevant plausible scenarios on the table, whether you like them or not, even if they scare you. Too often, we ignore scenarios that we judge to be low in probability but high in impact, especially if they seem difficult to prepare for.

Keep an eye open for opportunities. In times of high uncertainty and crisis, we tend to revert to playing defense, and focus on what might go wrong. Dedicating some attention to the positive circumstances that could emerge from a crisis can help you identify new opportunities.

Recognize the emotional journey. Engaging with scenarios can challenge your assumptions and at times feel like a threat to your knowledge and expertise. Strengthening your strategic foresight muscle, however, will build your capacity to be decisive despite uncertainty and discomfort—and ultimately, to become more “future fit.”

Give it action. Link your reflections about the future to actual decision-making and actions. Strategic foresight is there to enable you to make better-informed choices, and to reflect on the future in an action-oriented manner. So be ready to make changes based on what you learn.

Kristel Van der Elst is CEO of the Global Foresight Group, director general at Policy Horizons Canada, a special advisor to European Commission Vice-President Maroš Šefčovič, and a fellow at the Center for Strategic Foresight of the US Government Accountability Office. She is a visiting professor at the College of Europe, and the former head of strategic foresight at the World Economic Forum.