This US company sold iPhone hacking tools to UAE spies

When the United Arab Emirates paid over $1.3 million for a powerful and stealthy iPhone hacking tool in 2016, the monarchy’s spies—and the American mercenary hackers they hired—put it to immediate use.

The tool exploited a flaw in Apple’s iMessage app to enable hackers to completely take over a victim’s iPhone. It was used against hundreds of targets in a vast campaign of surveillance and espionage whose victims included geopolitical rivals, dissidents, and human rights activists. 

Documents filed by the US Justice Department on Tuesday detail how the sale was facilitated by a group of American mercenaries working for Abu Dhabi, without legal permission from Washington to do so. But the case documents do not reveal who sold the powerful iPhone exploit to the Emiratis.

Two sources with knowledge of the matter have confirmed to MIT Technology Review that the exploit was developed and sold by an American firm named Accuvant. It merged several years ago with another security firm, and what remains is now part of a larger company called Optiv. News of the sale sheds new light on the exploit industry as well as the role played by American companies and mercenaries in the proliferation of powerful hacking capabilities around the world.

Optiv spokesperson Jeremy Jones wrote in an email that his company has “cooperated fully with the Department of Justice” and that Optiv “is not a subject of this investigation.” That’s true: The subjects of the investigation are the three former US intelligence and military personnel who worked illegally with the UAE. However, Accuvant’s role as exploit developer and seller was important enough to be detailed at length in Justice Department court filings.

The iMessage exploit was the primary weapon in an Emirati program called Karma, which was run by DarkMatter, an organization that posed as a private company but in fact acted as a de facto spy agency for the UAE. 

Reuters reported the existence of Karma and the iMessage exploit in 2019. But on Tuesday, the US fined three former US intelligence and military personnel $1.68 million for their unlicensed work as mercenary hackers in the UAE. That activity included buying Accuvant’s tool and then directing UAE-funded hacking campaigns.

The US court documents noted that the exploits were developed and sold by American firms but did not name the hacking companies. Accuvant’s role has not been reported until now.

“The FBI will fully investigate individuals and companies that profit from illegal criminal cyber activity,” Bryan Vorndran, assistant director of the FBI’s Cyber Division, said in a statement. “This is a clear message to anybody, including former US government employees, who had considered using cyberspace to leverage export-controlled information for the benefit of a foreign government or a foreign commercial company—there is risk, and there will be consequences.”

Prolific exploit developer

Although the UAE is considered a close ally of the United States, DarkMatter has been linked to cyberattacks against a range of American targets, according to court documents and whistleblowers.

Helped by American partnership, expertise, and money, DarkMatter built up the UAE’s offensive hacking capabilities over several years from almost nothing to a formidable and active operation. The group spent heavily to hire American and Western hackers to develop and sometimes direct the country’s cyber operations.

At the time of the sale, Accuvant was a research and development lab based in Denver, Colorado, that specialized in and sold iOS exploits.

A decade ago, Accuvant established a reputation as a prolific exploit developer working with bigger American military contractors and selling bugs to government customers. In an industry that typically values a code of silence, the company occasionally got public attention. 

“Accuvant represents an upside to cyberwar: a booming market,” journalist David Kushner wrote in a 2013 profile of the company in Rolling Stone. It was the kind of company, he said, “capable of creating custom software that can enter outside systems and gather intelligence or even shut down a server, for which they can get paid up to $1 million.”

Optiv largely exited the hacking industry following the series of mergers and acquisitions, but Accuvant’s alumni network is strong—and still working on exploits. Two high-profile employees went on to cofound Grayshift, an iPhone hacking company known for its skills at unlocking devices.

Accuvant sold hacking exploits to multiple customers in both governments and the private sector, including the United States and its allies—and this exact iMessage exploit was also sold simultaneously to multiple other customers, MIT Technology Review has learned.

iMessage flaws

The iMessage exploit is one of several critical flaws in the messaging app that have been discovered and exploited over recent years. A 2020 update to the iPhone’s operating system shipped with a complete rebuilding of iMessage security in an attempt to make it harder to target.

The new security feature, called BlastDoor, isolates the app from the rest of the iPhone and makes it more difficult to access iMessage’s memory—the main way in which attackers were able to take over a target’s phone.

iMessage is a major target of hackers, for good reason. The app is included by default on every Apple device. It accepts incoming messages from anyone who knows your number. There is no way to uninstall it, no way to inspect it, nothing a user can do to defend against this kind of threat beyond downloading every Apple security update as soon as possible.

BlastDoor did make exploiting iMessage harder, but the app is still a favorite target of hackers. On Monday, Apple disclosed an exploit that the Israeli spyware company NSO Group had reportedly used to circumvent BlastDoor protections and take over the iPhone through a different flaw in iMessage. Apple declined to comment.

Inspiration4: Why SpaceX’s first all-private mission is a big deal

When 2001: A Space Odyssey was released in 1968, it didn’t feel like a stretch to dream of lounging in a space hotel, sipping a martini while watching Earth drift by. This vision got a boost in the early 1980s, when the space shuttle program heralded a future of frequent and routine trips to orbit. And when the first paying space tourists rocketed into space in the 2000s, many began wondering when they too might afford a trip to space.

There have been countless starry-eyed visions of a future where ordinary people, non-astronauts without billions of dollars in wealth, can travel to space. For all those moments of optimism, however, those dreams have never quite been realized. Space travel has, for the most part, remained the remit of professional astronauts or the very wealthy.

Yet—and whisper it very cautiously—that might be set to change. Later today, at 8:02 pm US Eastern time, a SpaceX Falcon 9 rocket is scheduled to take off from Cape Canaveral in Florida. On board will be a crew of four, the same number as on Elon Musk’s company’s last two crewed missions—themselves historic milestones. The major difference, this time, is that none of the occupants are trained astronauts. They are private citizens, launching on a private rocket, built by a private company. NASA will be nowhere to be seen.

Inspiration4, as the mission is known, has been lauded as a seismic moment in human spaceflight. It is the first all-private mission to be launched to orbit, paid for by US technology billionaire Jared Isaacman to raise funds for St. Jude Children’s Research Hospital in Memphis, at an estimated cost of $200 million.

Traveling with him are three very much not-billionaires: Hayley Arceneaux, a cancer survivor and physician assistant; Chris Sembroski, a Lockheed Martin employee whose friend won a competition for the seat and gave him the ticket; and Sian Proctor, a professor of geoscience who also competed for her seat. “These people represent humanity,” says Laura Forczyk of the space consulting firm Astralytical. “They’re ambassadors.”

Non-astronauts have gone to space before. From 2001 to 2009, seven people paid upwards of $30 million a seat for trips to the International Space Station on Russian Soyuz rockets. More recently, in July of this year, billionaires Richard Branson and Jeff Bezos made short suborbital hops into space, each lasting several minutes, on spacecraft built by their own companies.

Yet never before have people traveled to orbit without being propelled by their wealth and without the oversight of a national space agency such as NASA. “This is the first privately operated orbital spaceflight to have all private citizens as its passengers,” says spaceflight expert Jonathan McDowell of the Harvard-Smithsonian Center for Astrophysics. “Compared to the suborbital [flights], it’s so much more ambitious.”

Rather than docking with the International Space Station (ISS) like SpaceX’s other crewed missions, the mission’s Crew Dragon spacecraft will remain in Earth orbit for three days under its own power. The crew will eat, drink, sleep, and use the toilet within the confines of their spacecraft, named Resilience, which boasts about three times the interior volume of a large car. To keep them occupied, the docking port of the spacecraft, which would normally be used to connect to the ISS, has been converted into a glass dome, affording the crew glorious panoramic views of Earth and the universe beyond.

Beyond this, the goals of the mission are limited. There are some scientific experiments planned, but the most notable aspect of the mission is what will not happen. In particular, none of the crew will directly pilot the spacecraft. Instead, it will be controlled autonomously and with the help of mission control back down on Earth. That is not a trivial change, explains McDowell, and there are risks involved. “For the first time, if the automatic systems don’t work, you could be in real trouble,” he says. “What this shows is the increased confidence in the software and automatic control systems that allow you to fly tourists without a chaperone.” 

All of this combines to make the launch of Inspiration4 an exciting moment in human spaceflight, albeit one that has been tentatively attempted before. In the 1980s, NASA had hoped to begin something similar—the Space Flight Participant Program, an effort to give various private citizens the opportunity to fly to space on the space shuttle. “It was felt that some of the astronauts were a little reserved in their descriptions of the flight,” says author Alan Ladwig, who led the program. NASA wanted people who could communicate the experience better and selected a teacher, a journalist, and an artist.

The program, however, came to a tragic end. Its first participant, Christa McAuliffe, a teacher from New Hampshire, died in the space shuttle Challenger explosion of 1986, along with the other six members of the crew. The program was canceled, and the space shuttle program as a whole stagnated. Experts once envisioned it would fly hundreds of missions a year, but only 110 more launches took place in the subsequent 25 years, until the shuttles were retired in 2011.

The majority of space travel will remain the remit of professional astronauts and the extremely wealthy for the time being. If you’re not rich, you will still be limited to applying for competitions or hoping for a ticket from a wealthy benefactor—perhaps not the glorious future of space travel many envisioned.

But Inspiration4 shows that opportunities for more “regular” people to go to space, though few and far between, are available. “It is a milestone in human access,” says space historian John Logsdon, professor emeritus at George Washington University’s Space Policy Institute. “In a very simplistic sense, it means anybody can go.”

You won’t be flying in a Pan Am space plane on your way to a giant rotating space hotel just yet, but who’s to say what the future might hold. “This is a brand-new industry in its infancy, and we’re seeing the first steps,” says Forczyk. “We don’t know how far it’s going to run.”

Activists are helping Texans get access to abortion pills online

At the end of August, KT Volkova got an abortion in central Texas, where they live. KT was nearly six weeks pregnant. 

“Time was of the essence,” they say. Just a few days later, on September 1, SB8 became law in Texas. SB8 effectively bans abortion in the state by making the procedure illegal when a heartbeat is detected, usually around six weeks after someone’s last period (unpredictable cycles mean that many people don’t know they are pregnant at that point). SB8 also offers a $10,000 bounty on those who help someone have an abortion within the state after the six-week mark. 

Volkova was one of the lucky ones. An untold number of pregnant people in Texas are now stranded, unable to access a safe abortion within the state’s borders. Now activists are fighting back. Money-raising efforts for people trying to fund out-of-state abortions have taken off within Texas. Citizen activists are spamming bounty sites with fake reports.

And for pregnant Texans needing an abortion now, nonprofits are stepping in to help. Aid Access, which helps provide access to abortion pills online, has seen a spike in requests since the bill passed. The pills are mifepristone—which blocks progesterone, a hormone needed to maintain a pregnancy—and misoprostol, which induces a miscarriage.

“We definitely saw an increase after September 1 from Texas,” says Christie Pitney, a midwife who volunteers with Aid Access. Some people are even stockpiling abortion pills in Texas in case they need them in the future. She says Aid Access is now looking to add more volunteers to take calls in the state. 

The process requires only an internet connection: patients go online and answer some HIPAA-compliant questions about their pregnancy, such as when the first day of their last period was. If it’s a straightforward case, it’s approved by the doctor—there are seven American doctors covering 15 states—and the medication arrives in a few days. In places like Texas, where Aid Access doesn’t have doctors in state, its founder, Rebecca Gomperts, prescribes the medication from Europe, where she is based. That can take around three weeks, Pitney says.

The ability to get a safe, discreet abortion at home with just an internet connection could be life-changing for Texans and others in need. “It’s really changed the face of abortion access,” says Elisa Wells, the cofounder of Plan C, which provides information and education about how to access the pills.

In Texas, the need is especially acute because cultural stigma and a history of restrictive laws mean there are very few in-person clinics available. Before the recent law change, Texans were three times more likely than the national average to use abortion pills, because abortion clinics were so far away.

“In a situation like Texas, where mainstream avenues of access have been almost entirely cut off, it is a solution,” says Wells, who describes much of Texas as an “abortion desert.” Black and Hispanic people often have less access to medical care, and so the ability to access abortion pills online is vital for these communities.

They’re also much cheaper than in-clinic abortions, with most pills costing $105 to $150 plus a required online consultation, depending on which state you live in. (Aid Access forgives some or all of the payment if necessary.)

But while they’re commonly prescribed in other countries (they’re used in around 90% of abortions in France and Scotland, for example), only 40% of American abortions use pills. In fact, using the pills in the US to “self-manage an abortion” can lead to charges in at least 20 states, including Texas, and has been the basis for the arrest of 21 people since 2000. Aid Access’s use of Gomperts to write prescriptions as a foreign doctor has come under federal investigation by the FDA, which the group challenged. The situation remains unresolved. 

Reaching people in need is another challenge, so activists often target Instagram and TikTok to pass on information about the pills. They use Instagram slideshows to show how they work and hashtags that make searching for information easier. 

But not everyone is happy about the work that these groups do. In just the past week, Wells says, Plan C’s Instagram page has been shut down multiple times after someone reported it. “We appealed it,” she says. “The language was general—something about violating the terms and conditions. Do they think we are selling medication? Because we’re not. Are we doing something illegal? It’s freedom of speech, so no.” An Instagram spokesperson confirmed that Plan C was taken offline but said the page was “mistakenly disabled.”

Another problem on Instagram has been unsearchable hashtags, particularly for #mifepristone and #misoprostol. Instagram refused to answer questions about this on the record.

The Instagram shutdowns have led abortion organizations to revert to pre-internet tactics that are harder to censor and that reach those without internet access. These include using a giant billboard on a truck to drive through Texan towns, operating 24-hour hotlines, distributing stickers and zines to locals to plaster around public spaces, and other “guerrilla marketing techniques.”

Volkova, who also had a previous abortion, has refocused their energy on activism, working with Buckle Bunnies, a grassroots group collecting funds for abortions in Texas. 

“For a long time, I felt ashamed to speak about my abortion,” they say. “But I realized there is so much love and support in organizations that do this work, and this love has empowered me to share my story and continue working in expanding abortion access.”

Correction: In the original version of this story we misgendered KT, who uses they/them pronouns. We have now corrected this. We apologize for the error.

Why Facebook is using Ray-Ban to stake a claim on our faces

Last week Facebook released its new $299 “Ray-Ban Stories” glasses. Wearers can use them to record and share images and short videos, listen to music, and take calls. The people who buy these glasses will soon be out in public and private spaces, photographing and recording the rest of us, and using Facebook’s new “View” app to sort and upload that content.

My issue with these glasses is partially what they are, but mostly what they will become, and how that will change our social landscape.

How will we feel going about our lives in public, knowing that at any moment the people around us might be wearing stealth surveillance technology? People have recorded others in public for decades, but such recording has become more difficult for the average person to detect, and Facebook’s new glasses will make it harder still, since they look like ordinary Ray-Bans and carry the brand’s name.

That brand’s trusted legacy of “cool” could make Facebook’s glasses appeal to many more people than Snap Spectacles and other camera glasses. (Facebook also has roughly 2 billion more users than Snapchat.) And Facebook can take advantage of the global supply chain and retail outlet infrastructure of Luxottica, Ray-Ban’s parent company. This means the product won’t have to roll out slowly—even worldwide.

Facebook’s glasses could become especially popular during these pandemic times, for they offer a way to record images and sounds without needing to touch a phone or any other surface. They may also be a hit with parents who need to pay attention to their kids but still want to capture spontaneous moments.

At first glance, recording with Facebook’s glasses may not seem much different from snapping a photo or video with a smartphone. However, the way the glasses cover the wearer’s eyes and create photos and videos from that person’s viewpoint changes what such activity means for social groups.

With this product, Facebook is claiming the face as real estate for its own technology. The glasses will become a perpetual viewfinder, emphasizing each wearer’s perspective over the experience of being in any group. As a result, people wearing them may be more drawn to capturing scenes from their unique point of view than actually participating. Also, since more than one person at a time might be wearing the glasses in any given group, this effect could be magnified, and social cohesion could be further fragmented.

Earlier this year, I wrote an ethics paper with Catherine Flick of De Montfort University in the UK, which was published in the May 2021 Journal of Responsible Technology. We argued that the unbridled deployment of “smart glasses” raises serious unforeseen questions about the future of public social interaction.

Ray-Ban Stories are a step toward Mark Zuckerberg’s long-term vision for Facebook, which is to realize and participate in the “metaverse.” Venture capitalist Matthew Ball describes the metaverse as a space of “unprecedented interoperability” with a seamless, integrated economy. Zuckerberg explained it as a shared space that unifies many companies and mediated experiences, including real, virtual, and augmented worlds.

Zuckerberg calls Ray-Ban Stories “one milestone on the path” to immersive augmented-reality (AR) glasses. In 2020, Facebook announced Project Aria, which uses AR-enabled glasses to map the terrain of the public and some private spaces. This mapping effort intends to build up geolocation information and intellectual property to feed the data needs of future AR glasses wearers—and likely advance Facebook’s contribution to the metaverse. As Zuckerberg mentioned in a video introducing Ray-Ban Stories, he plans to ultimately replace mobile phones with Facebook smart glasses.

Glasses provide different social cues than smartphones. We can tell who is on a phone because we can see the phone in people’s hands. Figuring out who is wearing Facebook’s glasses will be more challenging. In part, the Google Glass experiment failed because Glass looked different from normal eyewear, and we could easily identify and avoid those wearing it. But Ray-Ban Stories look a lot like normal Ray-Bans.

With Ray-Ban Stories, we can’t always know who is recording, when or where they are doing it, or what will happen to the data they collect. A small light indicates that the glasses are recording, but that isn’t visible from far away. There’s a quiet “shutter” sound when the person wearing the glasses takes a photo, but it’s hard for others to hear. Even if they do hear it, not knowing what someone intends to do with a recording could cause anyone who is privacy-conscious to worry.

Facebook’s View app “promises to be a safe space,” according to one review, but uploading data through the View app to other Facebook apps makes it unclear which privacy policies apply and how content the glasses record could ultimately be used. People using Ray-Ban Stories may also be subjected to additional surveillance. The View app states that a wearer’s voice commands could be recorded and shared with Facebook to “improve and personalize [the wearer’s] experience.” The user must opt out to avoid this.

When some (but not all) of the people we interact with are cloaked in Ray-Ban Stories, we may not be able to fully cooperate with each other. We may not want to be recorded. Or if we don’t own Facebook’s glasses, or aren’t on Facebook, we may not be able to participate in social activities in the same way as those with Ray-Ban Stories.

To date, Facebook hasn’t had a portable consumer hardware device in the market that works with a mobile phone and back-end software, and it’s clear the company is new at this. It lists only five “responsibility” rules for people who purchase the glasses. Believing that people will actually comply with these rules is either naïve or very optimistic.

These glasses are Facebook’s first step toward building a complete hardware ecosystem for the company’s coming attempts at creating the metaverse. With Ray-Ban Stories, it has gained new capabilities to collect data about people’s behavior, location, and content—even if the company doesn’t use that information yet—as it works toward loftier goals.

While Facebook conducts an enormous beta test in our public spaces, concerned people will be even more on guard in public and may even take evasive measures, such as wearing hats or glasses, or turning away from anyone wearing Ray-Bans. If Facebook adds facial recognition to these glasses in the future, as the company is reportedly considering, people will have to find new countermeasures. This robs us of our peace.

Ray-Ban Stories are now for sale in the US, Canada, the UK, Ireland, Italy, and Australia. How people use and respond to the device will vary wildly across countries that have different social norms, values, laws, and expectations of privacy. Facebook may be one of the first companies to attempt to deploy smart camera glasses, but it will not be the last. Many other versions will follow, and we’ll need to look out not just for Ray-Bans, but for all types of devices recording us in more subtle ways. 

Now go out and get yourself some big black frames,
With the glass so dark they won’t even know your name,
And the choice is up to you cause they come in two classes,
Rhinestone shades or cheap sunglasses.

—ZZ Top 

S.A. Applin is an anthropologist and senior consultant whose research explores the domains of human agency, algorithms, AI, and automation in the context of social systems and sociability. You can find more at @anthropunk, sally.com, and PoSR.org.

Pandemic tech left out public health experts. Here’s why that needs to change.

Exposure notification apps were developed at the start of the pandemic, as technologists raced to help slow the spread of covid. The most common system was developed jointly by Google and Apple, and dozens of apps around the world were built using it—MIT Technology Review spent much of 2020 tracking them. The apps, which run on ordinary smartphones and rely on Bluetooth signals to operate, have weathered plenty of criticism over privacy worries and tech glitches. Many in the US have struggled with low numbers of downloads, while the UK recently had the opposite problem as people were deluged with alerts.

Now we’re looking back at how this technology rolled out, especially because it might offer lessons for the next phase of pandemic tech. 

Susan Landau, a Tufts University professor in cybersecurity and computer science, is the author of People Count, a book on how and why contact tracing apps were built. She also published an essay in Science last week arguing that new technology to support public health should be thoroughly vetted for ways that it might add to unfairness and inequities already embedded in society.

“The pandemic will not be the last humans face,” Landau writes, calling for societies to “use and build tools and supporting health care policy” that will protect people’s rights, health, and safety and enable greater health-care equity.

This interview has been condensed and edited for clarity.

What have we learned since the rollout of covid apps, especially about how they could have worked differently or better? 

The technologists who worked on the apps were really careful about making sure to talk to epidemiologists. What they probably didn’t think about enough was: These apps are going to change who gets notified about being potentially exposed to covid. They are going to change the delivery of [public health] services. That’s the conversation that didn’t happen.

For example, if I received an exposure notification last year, I would call my doctor, who’d say, “I want you to get tested for covid.” Maybe I would isolate myself in my bedroom, and my husband would bring me food. Maybe I wouldn’t go to the supermarket. But other than that, not much would change for me. I don’t drive a bus. I’m not a food service worker. For those people, getting an exposure notification is really different. You need to have social services to help support them, which is something public health knows about. 

Susan Landau (courtesy photo)

In Switzerland, if you get an exposure notification, and if the state says “Yeah, you need to quarantine,” they will ask, “What’s your job? Can you work from home?” And if you say no, the state will come in with some financial support to stay home. That’s putting in social infrastructure to support the exposure notification. Most places did not—the US, for example.

Epidemiologists study how disease spreads. Public health [experts] look at how we take care of people, and they have a different role. 

Are there other ways that the apps could have been designed differently? What would have made them more useful?

I think there’s certainly an argument for having 10% of the apps actually collect location, to be used only for medical purposes to understand the spread of the disease. When I talked to epidemiologists back in May and June 2020, they would say, “But if I can’t tell where it’s spreading, I’m losing what I need to know.” That’s a governance issue by Google and Apple.

There’s also the issue of how efficacious this is. That ties back in with the equity issue. I live in a somewhat rural area, and the closest house to me is several hundred feet away. I’m not going to get a Bluetooth signal from somebody else’s phone that results in an exposure notification. If my bedroom happens to be right against the bedroom of the apartment next door, I could get a whole bunch of exposure notifications if the person next door is ill—the signal can go through wood walls. 

Why did privacy become so important to the designers of contact tracing apps? 

Where you’ve been is really revelatory because it shows things like who you’ve been sleeping with, or whether you stop at the bar after work. It shows whether you go to the church on Thursdays at seven but you don’t ever go to the church any other time, and it turns out Alcoholics Anonymous meets at the church then. For human rights workers and journalists, it’s obvious that tracking who they’ve been with is very dangerous, because it exposes their sources. But even for the rest of us, who you spend time with—the proximity of people—is a very private thing.

Other countries use a protocol that includes more location tracking—Singapore, for example.

Singapore said, “We’re not going to use your data for other things.” Then they changed it, and they’re using it for law enforcement purposes. And the app, which started out as voluntary, is now needed to get into office buildings, schools, and so on. There is no choice but for the government to know who you’re spending time with. 

I’m curious about your thoughts on some bigger lessons for building public technology in a crisis.

I work in cybersecurity, and in that field it took us a really long time to understand that there’s a user at the other end, and the user is not an engineer sitting at Sun Microsystems or Google in the security group. It’s your uncle. It’s your kid sister. And you want to have people who understand how people use things. But it’s not something that engineers are trained to do—it’s something that the public health people or the social scientists do, and those people have to be an integral part of the solution. 

I want a public health person to say to me, “This population is going to react to the app this way.” For example, the Cambodian population that’s in the United States—many of them were traumatized by government. They’re going to respond one way. The immigrant population that comes from India may respond in a different way. In my book, I talk about the Apache reservation in eastern Arizona, which took into account the social factor. It’s a public health measure—not a contact tracing measure—to ask about someone’s other set of grandparents.

Digital vaccine apps and credentials are now rolling out in a wide array of states and countries, and being required by private entities. For those to work, who should be in the room as they’re designed?

You want the technologists who have thought about identity management and people who think about privacy. How do you reveal one piece of information without revealing everything else? 

And you want to get people who really appreciate the privacy issues of disease. What jumps to mind is the epidemiologists and contact tracers who worked with AIDS, which was really an explosive issue back in the 1980s. You want them because they understand public health, and they really understand the importance of the privacy issue. They get it in their gut. 

It’s getting smart people from both sides in the room. They have to be smart, because it’s hard to understand somebody else’s language. And both groups have to understand what the other is saying, but they also have to be confident enough that they’re willing to ask lots of questions. It’s the really understanding that’s hard.

This story is part of the Pandemic Technology Project, supported by the Rockefeller Foundation.

A horrifying new AI app swaps women into porn videos with a click

Update: As of September 14, a day after this story published, Y posted a new notice saying it is now unavailable. We will continue to monitor the site for more changes.

The website is eye-catching for its simplicity. Against a white backdrop, a giant blue button invites visitors to upload a picture of a face. Below the button, four AI-generated faces allow you to test the service. Above it, the tag line boldly proclaims the purpose: turn anyone into a porn star by using deepfake technology to swap the person’s face into an adult video. All it requires is the picture and the push of a button.

MIT Technology Review has chosen not to name the service, which we will call Y, or use any direct quotes and screenshots of its contents, to avoid driving traffic to the site. It was discovered and brought to our attention by deepfake researcher Henry Ajder, who has been tracking the evolution and rise of synthetic media online.

For now, Y exists in relative obscurity, with a small user base actively giving the creator development feedback in online forums. But researchers have feared that an app like this would emerge, breaching an ethical line no other service has crossed before.

From the beginning, deepfakes, or AI-generated synthetic media, have primarily been used to create pornographic representations of women, who often find this psychologically devastating. The original Reddit creator who popularized the technology face-swapped female celebrities’ faces into porn videos. To this day, the research company Sensity AI estimates, between 90% and 95% of all online deepfake videos are nonconsensual porn, and around 90% of those feature women.

As the technology has advanced, numerous easy-to-use no-code tools have also emerged, allowing users to “strip” the clothes off female bodies in images. Many of these services have since been forced offline, but the code still exists in open-source repositories and has continued to resurface in new forms. The latest such site received over 6.7 million visits in August, according to the researcher Genevieve Oh, who discovered it. It has yet to be taken offline.

There have been other single-photo face-swapping apps, like ZAO or ReFace, that place users into selected scenes from mainstream movies or pop videos. But as the first dedicated pornographic face-swapping app, Y takes this to a new level. It’s “tailor-made” to create pornographic images of people without their consent, says Adam Dodge, the founder of EndTAB, a nonprofit that educates people about technology-enabled abuse. This makes it easier for the creators to improve the technology for this specific use case and entices people who otherwise wouldn’t have thought about creating deepfake porn. “Anytime you specialize like that, it creates a new corner of the internet that will draw in new users,” Dodge says.

Y is incredibly easy to use. Once a user uploads a photo of a face, the site opens up a library of porn videos. The vast majority feature women, though a small handful also feature men, mostly in gay porn. A user can then select any video to generate a preview of the face-swapped result within seconds—and pay to download the full version.

The results are far from perfect. Many of the face swaps are obviously fake, with the faces shimmering and distorting as they turn different angles. But to a casual observer, some are subtle enough to pass, and the trajectory of deepfakes has already shown how quickly they can become indistinguishable from reality. Some experts argue that the quality of the deepfake also doesn’t really matter because the psychological toll on victims can be the same either way. And many members of the public remain unaware that such technology exists, so even low-quality face swaps can be capable of fooling people.

Y bills itself as a safe and responsible tool for exploring sexual fantasies. The language on the site encourages users to upload their own face. But nothing prevents them from uploading other people’s faces, and comments on online forums suggest that users have already been doing just that.

The consequences for women and girls targeted by such activity can be crushing. At a psychological level, these videos can feel as violating as revenge porn—real intimate videos filmed or released without consent. “This kind of abuse—where people misrepresent your identity, name, reputation, and alter it in such violating ways—shatters you to the core,” says Noelle Martin, an Australian activist who has been targeted by a deepfake porn campaign.

And the repercussions can stay with victims for life. The images and videos are difficult to remove from the internet, and new material can be created at any time. “It affects your interpersonal relations; it affects you with getting jobs. Every single job interview you ever go for, this might be brought up. Potential romantic relationships,” Martin says. “To this day, I’ve never been successful fully in getting any of the images taken down. Forever, that will be out there. No matter what I do.”

Sometimes it’s even more complicated than revenge porn. Because the content is not real, women can doubt whether they deserve to feel traumatized and whether they should report it, says Dodge. “If somebody is wrestling with whether they’re even really a victim, it impairs their ability to recover,” he says.

Nonconsensual deepfake porn can also have economic and career impacts. Rana Ayyub, an Indian journalist who became a victim of a deepfake porn campaign, received such intense online harassment in its aftermath that she had to minimize her online presence and thus the public profile required to do her work. Helen Mort, a UK-based poet and broadcaster who previously shared her story with MIT Technology Review, said she felt pressure to do the same after discovering that photos of her had been stolen from private social media accounts to create fake nudes.

The Revenge Porn Helpline funded by the UK government recently received a case from a teacher who lost her job after deepfake pornographic images of her were circulated on social media and brought to her school’s attention, says Sophie Mortimer, who manages the service. “It’s getting worse, not better,” Dodge says. “More women are being targeted this way.”

Y’s option to create deepfake gay porn, though limited, poses an additional threat to men in countries where homosexuality is criminalized, says Ajder. This is the case in 71 jurisdictions globally, 11 of which punish the offense by death.

Ajder, who has discovered numerous deepfake porn apps in the last few years, says he has attempted to contact Y’s hosting service and force it offline. But he’s pessimistic about preventing similar tools from being created. Already, another site has popped up that seems to be attempting the same thing. He thinks banning such content from social media platforms, and perhaps even making their creation or consumption illegal, would prove a more sustainable solution. “That means that these websites are treated in the same way as dark web material,” he says. “Even if it gets driven underground, at least it puts that out of the eyes of everyday people.”

Y did not respond to multiple requests for comment at the press email listed on its site. The registration information associated with the domain is also blocked by the privacy service Withheld for Privacy. On August 17, after MIT Technology Review made a third attempt to reach the creator, the site put up a notice on its homepage saying it’s no longer available to new users. As of September 12, the notice was still there.

There’s a gig-worker-sized hole in Biden’s vaccine mandate plan

The news: President Joe Biden has signed an executive order that will require millions of American workers to get vaccinated against covid-19. The order mandates all companies with more than 100 workers to require employees to be vaccinated or get tested weekly. Employers will have to provide paid time off for employees to get their shots. The order also covers most health-care workers and workers at nursing homes, plus people who work for the federal government or a government contractor. 

Government workers could face disciplinary action if they refuse. If properly enacted, the mandate will reach about two-thirds of American workers (although it’s unclear how many of them have already been vaccinated). It’s part of a multi-pronged plan to try to get the pandemic under control in the US, which includes measures like requiring employers to offer paid time off for vaccination and ordering large, indoor venues to require proof of vaccination or a negative test. 

Falling through the gaps: As we’ve written before, paid time off may be one of the best vaccine incentives. But again, Biden’s order covers only employers with 100 or more workers. Gig workers aren’t mentioned in the policy, even though they stand to benefit the most from policies that would offset the loss of hourly income, and they make up a significant and growing proportion of the US workforce. The mandates also won’t help people who are out of work, a group of about 8.4 million Americans.

Why he’s doing it: Biden has previously bet that incentives like money, food, and beer might help to nudge people to get vaccinated, yet 80 million people still haven’t gotten their shots. About 177 million Americans are fully vaccinated: just 62.5% of those eligible and only 52% of the total population. In short, he’s tried the carrot approach—now he’s opting for the stick.

With the US still reporting about 150,000 new cases a day, Biden seems to be placing responsibility for the dire situation on the unvaccinated, and he’s betting that many people in America feel the same way. Addressing unvaccinated people directly, he said: “We’ve been patient. But our patience is wearing thin. And your refusal has cost all of us.” 

Lessons learned: Biden’s plan also includes measures to make it easier for people to find places to get vaccinated. In the initial vaccine roll-out in December, many Americans were confused about available vaccination sites and supplies. But now, when the booster shots are approved, individuals will be able to find a vaccination site at Vaccines.gov, including what vaccines are available at each site and, for many sites, what appointments are open. A toll-free number will also be available in over 150 languages.

NASA is going to slam a spacecraft into an asteroid. Things might get chaotic.

The dinosaurs didn’t have a space program, so when an asteroid headed toward Earth with their name on it 65 million years ago, they had no warning and no way to defend themselves. We know how that turned out.

Humans are, understandably, keen to avoid the same fate. Later this year, NASA will launch a mission to practice how we might deflect a future Earthbound asteroid. The Double Asteroid Redirection Test (DART) is targeted to launch as soon as November 24 (or as late as February 2022) and will take a year to reach its target: Dimorphos, a stadium-size asteroid that is orbiting a much larger asteroid called Didymos.

The plan is to hit Dimorphos at a speed of 6.5 kilometers per second with the car-sized DART spacecraft, which weighs about 600 kilograms, changing its almost-12-hour orbit around Didymos by a few minutes. A European Space Agency mission arriving five years later, called Hera, will check to see if the mission worked. The impact will have only a small effect on the orbit, but that should be enough to deflect an asteroid from Earth’s path in the future—so long as we hit it far enough in advance. “We’re doing this to have the ability to prevent a truly catastrophic natural disaster,” says Tom Statler, DART program scientist at NASA headquarters in Washington, DC.
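
Those headline numbers can be sanity-checked with a rough momentum-conservation estimate. The sketch below is not the mission team’s model: the spacecraft mass, Dimorphos’s mass, and its orbital speed are assumed values based on published pre-impact estimates rather than figures from this article, and the extra push from ejecta thrown off the surface is ignored.

```python
# Back-of-envelope estimate of how the DART impact shifts Dimorphos's orbit.
# Values marked "assumed" are published pre-impact estimates, not figures
# from this article; the momentum boost from ejecta is ignored (beta = 1).

m_dart = 600.0        # kg, approximate spacecraft mass at impact (assumed)
v_impact = 6.5e3      # m/s, impact speed (from the article)
M_dimorphos = 5.0e9   # kg, mass of Dimorphos (assumed)
v_orbit = 0.17        # m/s, orbital speed of Dimorphos around Didymos (assumed)
T_hours = 11.9        # hours, pre-impact orbital period ("almost 12 hours")

# Conservation of momentum: velocity change imparted to Dimorphos.
delta_v = m_dart * v_impact / M_dimorphos            # roughly 0.8 mm/s

# For a near-circular orbit, a small along-track kick changes the period by
# about dT/T = 3 * dv/v (T scales as a**1.5, and da/a = 2 * dv/v).
delta_T_minutes = 3 * (delta_v / v_orbit) * T_hours * 60

print(f"delta-v imparted: {delta_v * 1000:.2f} mm/s")        # ~0.78 mm/s
print(f"orbital period change: ~{delta_T_minutes:.0f} min")  # ~10 minutes
```

Even with these crude inputs, the shift comes out on the order of minutes across a nearly 12-hour orbit, which matches the scale of change the mission is designed to produce.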

The potential changes to the orbit of Dimorphos have been well studied. But until now we haven’t known much about what will happen to Dimorphos itself after the impact. A paper published in the journal Icarus documents the first simulations to find out. 

Led by Harrison Agrusa from the University of Maryland, researchers modeled how much DART might change the spin or rotation of Dimorphos by calculating how the momentum of the impact will alter the asteroid’s roll, pitch, and yaw. The results could be dramatic. “It could start tumbling and enter a chaotic state,” says Agrusa. “This was really quite a big surprise.”

The unexpected spinning poses some interesting challenges. It will add to the difficulty of landing on the asteroid, which ESA hopes to attempt with two small spacecraft on its Hera mission. It could also make future attempts to deflect an Earthbound asteroid more complicated, as any rotation can affect an asteroid’s path through space.

When DART slams into Dimorphos, the energy of the impact will be comparable to three tons of TNT exploding, sending thousands of pieces of debris spewing into space. Statler describes it as a golf cart traveling at 15,000 miles an hour smashing into the side of a football stadium. The force of the impact will not cause any immediate changes to Dimorphos’s spin, but within days things will start to change, according to Agrusa and his team.
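
The “three tons of TNT” comparison can be checked directly from the spacecraft’s kinetic energy. In the quick calculation below, the impact speed is the one quoted above, while the spacecraft mass is an assumption (roughly DART’s published mass) and a ton of TNT is taken at its standard 4.184-gigajoule equivalent.

```python
# Quick check of the impact-energy comparison: spacecraft kinetic energy
# expressed in tons of TNT. Mass is an assumed figure; speed is from the
# article; 1 ton of TNT is defined as 4.184 gigajoules.

m_dart = 600.0            # kg, approximate spacecraft mass at impact (assumed)
v_impact = 6.5e3          # m/s, impact speed (from the article)
TNT_TON_JOULES = 4.184e9  # J, energy released by one ton of TNT

kinetic_energy = 0.5 * m_dart * v_impact ** 2     # about 1.3e10 J
tnt_equivalent = kinetic_energy / TNT_TON_JOULES  # about 3 tons

print(f"kinetic energy: {kinetic_energy:.2e} J")
print(f"TNT equivalent: {tnt_equivalent:.1f} tons")
```

That works out to roughly 1.3 × 10^10 joules, or about three tons of TNT, consistent with the figure above.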

Soon, Dimorphos will start to wobble very slightly. This wobble will grow and grow as the momentum from the impact throws the rotation of Dimorphos out of balance, with no friction in the vacuum of space to slow it down. Dimorphos may start to spin one way and then another. It may start to rotate along its long axis, like a rotisserie. To an observer on Didymos looking into the sky, this seemingly sedate satellite will take on a new form—starting to swing wildly back and forth, its previously hidden sides now coming into view.

Within weeks, Dimorphos could spin so much that it enters a chaotic tumbling state where it is spinning uncontrollably around its axes. In more extreme scenarios the tidal lock with Didymos could break completely and Dimorphos might start flipping “head over heels,” says Agrusa.

Exactly what will happen will depend on a few things. Dimorphos’s shape will play an important part—if it’s more elongated rather than spherical, it’ll spin more chaotically. Radar observations so far suggest it is elongated, but we won’t know until just hours before DART hits, when it gets its first views of its small target. 

The location of the impact will play a part too. DART will be aiming for the center of Dimorphos, the goal being to impart the greatest amount of force so as to alter its orbit, but the more off-center the impact, the more chaotic the resulting spin will be. In most scenarios, however, Dimorphos should be dramatically swinging back and forth or tumbling in many directions within weeks.

When ESA’s Hera mission arrives five years later, the scene could be quite dramatic, with Dimorphos spinning wildly in its orbit around Didymos as a result of humanity’s influence. It will likely be decades or even centuries before the gravitational tug of Didymos returns Dimorphos to its original, presumed tidally locked, state. “The possibility that Hera might find Dimorphos in a chaotic tumbling state is really interesting and really exciting,” says Statler.

Hera’s arrival will be the only way we’ll know for sure what has happened to the spin of Dimorphos, as DART will be destroyed by the impact and Dimorphos is too small to be seen in detail from Earth. A small Italian-made satellite called LICIACube will be deployed prior to the impact and will take images during the event as it whizzes past, but it will only do so for a few minutes—not long enough to watch the wobbling take hold.

Hera is also planning to deploy two smaller satellites that will attempt to land on the surface of Dimorphos. The tumbling motion is not expected to hamper these efforts, but it could make them more difficult. Without proper planning for the chaotic rotation, the two small vehicles could bounce around and not quite end up where scientists want. “Landing on such a small body is hard anyway,” says Patrick Michel from the French National Centre for Scientific Research (CNRS), one of the mission leads on Hera and a coauthor on Agrusa’s paper. “But [this] doesn’t make it easier.”

The tumbling motion of Dimorphos is not expected to affect DART’s dress rehearsal for one day saving Earth, nor will it pose any danger to us on the planet, but there could be some scientifically useful information from the event. The spin state of asteroids could affect other properties, such as how much sunlight they reflect, which can have an impact on their trajectories—possibly something to take into account on a future asteroid deflection mission. “It’s not as simple as just crashing a spacecraft into the asteroid,” says astronomer Paul Wiegert of the University of Western Ontario. “There’s a lot of physics you need to understand.” 

Observing the system for years, decades, or even centuries will also give us an unprecedented opportunity to see how a binary asteroid system evolves after experiencing an impact like this. Hera alone could give us an indication of how strong the tidal effect is in returning the system to its normal state, helping us understand the gravitational relationship between two asteroids such as these. 

We’re about to see what happens when we slam a golf cart into a stadium. The results could be, well, rather chaotic. “It’s very cool,” says Federica Spoto, who studies asteroid dynamics at the Harvard-Smithsonian Center for Astrophysics in Massachusetts. “We’re really modifying a system.”