The Download: AI to predict ice, and healthcare censorship in China

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

Deep learning can almost perfectly predict how ice forms

The news: Researchers have used deep learning to model more precisely than ever before how ice crystals form in the atmosphere. Their paper, published this week in PNAS, hints at the potential to significantly increase the accuracy of weather and climate forecasting.

How they did it: The researchers used deep learning to predict how atoms and molecules behave. First, models were trained on small-scale simulations of water molecules to help them predict how electrons in atoms interact. The models then replicated those interactions on a larger scale, with more atoms and molecules. It’s this ability to precisely simulate electron interactions that allowed the team to accurately predict physical and chemical behavior.

Why it matters: If researchers could model how ice forms more accurately, it could give a big boost to weather predictions overall, especially those involving whether and how much it’s likely to rain or snow. It could also aid climate forecasting by improving the ability to model clouds, which affect the planet’s temperature in complex ways. Read the full story.

—Tammy Xu

China has censored a top health information platform

China has censored DXY, the country’s leading health information platform and online community for Chinese doctors. On August 9, DXY fell silent across its social media channels, where it boasts over 80 million followers. While Weibo offered the vague explanation that the platform’s five channels had violated “relevant laws and regulations,” Nikkei Asia reported that the order came from regulators and won’t end without official approval.

In the increasingly polarized social media environment in China, health care is becoming a target for controversy. The suspension has been met with a gleeful reaction among nationalist bloggers, who accuse DXY of receiving foreign funding, bashing traditional Chinese medicine, and criticizing China’s health-care system, illustrating just how politicized health topics have become. Read the full story.

—Zeyi Yang

Podcast: How to craft effective AI policy

When are policy makers going to catch up with the speed of AI innovation? The latest episode of our award-winning podcast, In Machines We Trust, examines what it will take to craft effective AI policy and ensure equitable access to technology. It was recorded before a live audience at MIT Technology Review’s annual AI conference, EmTech Digital. Listen to it for yourself.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 The FTC is planning to regulate how Big Tech collects data
It has surveillance advertising and targeting algorithms in its sights. (WSJ $)
+ Meta defended injecting code into sites to track users across the web. (The Guardian)
+ Google has been fined in Australia for misleading consumers about location data collection. (TechCrunch)

2 Predicting the US climate bill’s effects is harder than you might think
We don’t know how effectively or quickly its tax credits will cut emissions. (MIT Technology Review)
+ Emissions modeling is not an exact science. (Scientific American $)
+ The bill could have major ramifications for electric vehicle manufacturers. (NYT $)

3 A bioengineered cornea can restore sight to blind people
At a fraction of the cost of traditional cornea transplants from humans. (MIT Technology Review)

4 This is why thinking deeply makes you tired
Feeling exhausted is your brain’s way of balancing out chemical changes. (Economist $)
+ Self-taught AI could shine a light on how the brain works. (Quanta)

5 Micro-robots could replace your toothbrush 🦷
By clearing biofilm from your teeth and wriggling between them. (Neo.Life)
+ Drilling underground tunnels is another task robots might take on someday. (Wired $)

6 Caste discrimination is overshadowing Silicon Valley
India’s discriminatory caste system is affecting workers at Google and other tech companies around the world. (New Yorker $)

7 How Roomba took over our homes
And became the most recognizable consumer robot in the process. (WSJ $)
+ Its new owner, Amazon, wants to make a TV show of Ring doorbell footage. (Ars Technica)

8 The Nokia ringtones of the 2000s deserve some respect 🎶
Believe it or not, they’re a neglected part of electronic music history. (The Verge)

9 The origins of the Dark Brandon meme are complicated
Democrats have tried, with moderate success, to reclaim it from fascist corners of the internet. (Vox)
+ Its rise is another awkward moment in the meme wars. (The Atlantic $)
+ Twitter is swarming with bizarre theories about Ivana Trump’s casket. (Motherboard)

10 TikTokers are collecting the tags from their expensive athleisure
Just…why? (Motherboard)

Quote of the day

“We just see the workforce without the lens of people who had been in it pre-covid.”

—Ginsey Stephenson, 23, tells the Washington Post why younger workers are less prepared to compromise on flexible working-from-home arrangements.

The big story

Dementia content gets billions of views on TikTok. Whose story does it tell?

February 2022

A dementia diagnosis can instantly change how the world sees someone. The internet, at its best, can help make the reality of living with dementia more visible. And for some, the internet is the only place they can connect with others going through the same thing. But among the popular #Dementia hashtag on TikTok, it’s easy to find viral videos in which care partners mock dementia patients and escalate arguments with them on camera.

Creators have not settled on the ethics of making public content about someone who may no longer be able to consent to being filmed. Meanwhile, people who are themselves living with dementia are raising their own questions about consent, and emphasizing the harms caused by viral content that perpetuates stereotypes or misrepresents the full nature of the condition. Read the full story.

—Abby Ohlheiser

We can still have nice things

A place for comfort, fun and distraction in these weird times. (Got any ideas? Drop me a line or tweet ’em at me.)

+ Polynesia’s history of tattooing stretches back more than 3,000 years, which is pretty incredible.
+ Would you decorate your home with circus-inspired elements?
+ Panic over—a giant tortoise on a mission for love is set to make a full recovery after getting a bit too close to a train.
+ This teddy bear is a straight up chiller.
+ Anonymous messaging apps tend to be more trouble than they’re worth.

Predicting the climate bill’s effects is harder than you might think

The Inflation Reduction Act of 2022, which marks the US’s largest-ever investment in climate and clean energy at nearly $400 billion, is a clear environmental victory. But just how far that funding will go in cutting carbon emissions is yet to be seen, and the results are far less certain than some have claimed.

Estimates suggest that the bill’s mix of tax credits, grants, and loan programs could result in up to a billion tons of annual emissions reductions by 2030. According to several analyses, this would mean the US could reduce greenhouse-gas emissions by up to 40% from their peak levels in 2005.

While models have converged on this 40% reduction as a common estimate, some economists stress that tax credits can have uncertain effects, and predicting the actual future of emissions reductions is tricky. 

One reason for the uncertainty is that the bill relies largely on financial incentives rather than regulations or mandates, so its effectiveness depends on consumer choices and business decision-making. Both of those can be unpredictable.

For example, most of the emissions cuts are expected to come from the power industry, as the bill provides generous incentives for companies to build more renewable electricity. But local opposition to new construction, and other barriers to clean-energy projects such as solar and wind farms, could halt progress, discouraging investment and slowing deployment. 

Free money

Tax credits are at the heart of the Inflation Reduction Act’s strategy for cutting greenhouse-gas emissions.

Some of the tax credits in the bill are for individuals—there’s about $35 billion for people looking to upgrade and electrify their homes, as well as a $7,500 credit for purchasing new electric vehicles and a $4,000 credit for used ones. But how many people decide to buy these cars will likely depend on the health of the economy, how consistent the supply is, and whether they find them appealing.

Other tax credits are for companies and public utilities building clean-energy projects. This is a huge chunk of the bill—roughly $160 billion, including programs designed to keep nuclear power plants running. 

These credits come in two forms. Investment tax credits are calculated as a fraction of the initial investment to build a project, starting at 30%; if the plant costs a billion dollars, the builders get a $300 million credit. Production tax credits, on the other hand, are based on a plant’s output, paying out a couple of cents for every kilowatt-hour of electricity produced. 

A company looking to build a clean-energy project can choose between a production and an investment tax credit. Startups without a tax burden can sell their tax credits to another company that has one, so they can still cash in. 
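
To make that choice concrete, here is a rough back-of-the-envelope sketch. The 30% investment rate and the $1 billion plant cost come from the example above; the production rate (the “couple of cents” per kilowatt-hour), the annual output, and the ten-year credit window are hypothetical stand-ins, since actual values depend on a project’s specifics.

```python
# Rough sketch of the choice described above; illustrative numbers only.

def investment_tax_credit(project_cost, rate=0.30):
    # Credit as a fraction of the upfront cost of building the project.
    return project_cost * rate

def production_tax_credit(kwh_per_year, years, cents_per_kwh=2.6):
    # Credit paid on output: "a couple of cents" per kilowatt-hour produced.
    return kwh_per_year * years * cents_per_kwh / 100

plant_cost = 1_000_000_000       # the $1 billion plant from the example above
annual_output = 2_500_000_000    # kWh per year -- hypothetical
credit_years = 10                # assumed eligibility window -- hypothetical

itc = investment_tax_credit(plant_cost)                   # $300 million
ptc = production_tax_credit(annual_output, credit_years)  # $650 million

# A developer would generally claim whichever credit is worth more, which
# is why expected output (and thus siting, permitting, and demand) feeds
# back into how much the credits end up costing -- and cutting.
better = "production" if ptc > itc else "investment"
print(f"ITC: ${itc:,.0f}  PTC: ${ptc:,.0f}  -> choose the {better} credit")
```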

In the power sector, tax credits have been a long-standing strategy—wind and solar projects have qualified for credits for years, and the funding has been credited with helping those technologies succeed, says Karen Palmer, an energy economist at Resources for the Future. The new bill broadens these credits to include other clean-energy technologies.

The logic behind tax credits is simple. “If you make clean energy cheaper, people are going to produce more of it,” says Nat Keohane, an economist and the president of the Center for Climate and Energy Solutions.

Uncertainties

Economists agree that the new tax credits will reduce emissions—the question is how much. Early estimates come from three major modeling groups: Princeton’s REPEAT Project, Rhodium Group, and Energy Innovation. All agree that the new legislation should help get the US to about a 40% reduction in emissions by 2030.

“People don’t necessarily always do what is, on paper, the most economic.”

Robbie Orvis

In the REPEAT group’s estimate, almost 40% of the emissions reductions resulting from the bill will come from the power sector, largely driven by the huge tax credit programs for clean energy. An additional 30% will be in transportation, the group predicts, and another 15% in heavy industry.

Not all that expected progress is because of the bill. The US is emitting about 15% less today than it did in 2005, and with policies in place before the bill passed, the country was already on track for reductions of about 25% below 2005 levels by the end of the decade.  
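
To put rough numbers on that caveat using the estimates above: if the US was already on track for a cut of about 25% below 2005 levels, and the bill lifts the projection to about 40%, then the bill’s own contribution is roughly 40% − 25% = 15 percentage points, with the rest coming from trends and policies that predate it.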

And economists are quick to point out that while models can give a broad sense of how policy could shift future emissions, there are plenty of ways that picture could change.

In a report last year, Rhodium Group said the US was on track for emissions reductions of 17 to 30% by 2030. This year, in its June report, that estimate crept up to between 24 and 35%.

The difference in the model results wasn’t because of any specific new policy, but largely because of how fossil-fuel prices shot up after Russia invaded Ukraine and how the US government started responding to inflation concerns, says Ben King, associate director for Rhodium Group’s climate and energy program. Any factors that shift the price of fossil fuels will likely impact emissions predictions.

Human decision-making can also cause models and reality to misalign. “People don’t necessarily always do what is, on paper, the most economic,” says Robbie Orvis, who leads the energy policy solutions program at Energy Innovation.

This is a common issue for consumer tax credits, like those for electric vehicles or home energy efficiency upgrades. Often people don’t have the information or funds needed to take advantage of tax credits.

Likewise, there are no assurances that credits in the power sector will have the impact that modelers expect. Finding sites for new power projects and getting permits for them can be challenging, potentially derailing progress. Some of this friction is factored into the models, Orvis says. But there’s still potential for more challenges than modelers expect.

Not enough

Putting too much stock in results from models can be problematic, says James Bushnell, an economist at the University of California, Davis. For one thing, models could overestimate how much behavior change is attributable to tax credits. Some of the projects that are claiming tax credits would probably have been built anyway, Bushnell says, especially solar and wind installations, which are already becoming more widespread and cheaper to build.

Still, whether or not the bill meets the expectations of the modelers, it’s a step forward in providing climate-friendly incentives, since it replaces solar- and wind-specific credits with broader clean-energy credits that will be more flexible for developers in choosing which technologies to deploy.

Another positive of the legislation is all its long-term investments, whose potential impacts aren’t fully captured in the economic models. The bill includes money for research and development of new technologies like direct air capture and clean hydrogen, which are still unproven but could have major impacts on emissions in the coming decades if they prove to be efficient and practical. 

Whatever the effectiveness of the Inflation Reduction Act, however, it’s clear that more climate action is still needed to meet emissions goals in 2030 and beyond. Indeed, even if the predictions of the modelers are correct, the bill is still not sufficient for the US to meet its stated goals under the Paris agreement of cutting emissions to half of 2005 levels by 2030.

The path ahead for US climate action isn’t as certain as some might wish it were. But with the Inflation Reduction Act, the country has taken a big step. Exactly how big is still an open question. 

Deep learning can almost perfectly predict how ice forms

Researchers have used deep learning to model more precisely than ever before how ice crystals form in the atmosphere. Their paper, published this week in PNAS, hints at the potential to significantly increase the accuracy of weather and climate forecasting.

The researchers used deep learning to predict how atoms and molecules behave. First, models were trained on small-scale simulations of 64 water molecules to help them predict how electrons in atoms interact. The models then replicated those interactions on a larger scale, with more atoms and molecules. It’s this ability to precisely simulate electron interactions that allowed the team to accurately predict physical and chemical behavior. 

“The properties of matter emerge from how electrons behave,” says Pablo Piaggi, a research fellow at Princeton University and the lead author on the study. “Simulating explicitly what happens at that level is a way to capture much more rich physical phenomena.”
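
The study’s actual models are far more sophisticated, but the basic two-step pattern can be sketched roughly as follows: train a cheap neural-network surrogate on a small number of expensive electron-level reference calculations, then use the surrogate to simulate much bigger systems for much longer. Everything in this sketch (the descriptor size, the network shape, and the random stand-in data) is an illustrative assumption, not the paper’s code.

```python
# Minimal sketch of a machine-learned surrogate for expensive
# electron-level calculations. All sizes and data are placeholders.
import torch
import torch.nn as nn

# Step 1: reference data from small, expensive quantum simulations
# (the paper trained on simulations of 64 water molecules; here we
# fake the data with random tensors just to show the workflow).
n_configs, n_features = 1_000, 32          # descriptor length: assumed
descriptors = torch.randn(n_configs, n_features)
ref_energies = torch.randn(n_configs, 1)   # stand-in "quantum" energies

surrogate = nn.Sequential(
    nn.Linear(n_features, 64), nn.Tanh(),
    nn.Linear(64, 64), nn.Tanh(),
    nn.Linear(64, 1),                      # predicted potential energy
)

optimizer = torch.optim.Adam(surrogate.parameters(), lr=1e-3)
for step in range(500):                    # fit the surrogate to the reference data
    optimizer.zero_grad()
    loss = nn.functional.mse_loss(surrogate(descriptors), ref_energies)
    loss.backward()
    optimizer.step()

# Step 2: the trained surrogate costs a tiny fraction of a quantum
# calculation per evaluation, so it can drive simulations with many
# more molecules over far longer timescales (e.g. inside an MD loop).
large_system = torch.randn(100_000, n_features)
with torch.no_grad():
    predicted = surrogate(large_system)
print(predicted.shape)                     # torch.Size([100000, 1])
```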

It’s the first time this method has been used to model something as complex as the formation of ice crystals, also known as ice nucleation. This is one of the first steps in the formation of clouds, which is where all precipitation comes from. 

Xiaohong Liu, a professor of atmospheric sciences at Texas A&M University who was not involved in the study, says half of all precipitation events—whether snow or rain or sleet—begin as ice crystals, which then grow larger and result in precipitation. If researchers could model ice nucleation more accurately, it could give a big boost to weather prediction overall.

Ice nucleation is currently predicted on the basis of laboratory experiments. Researchers collect data on ice formation under different laboratory conditions, and that data is fed into weather prediction models under similar real-world conditions. This method works well enough sometimes, but often it ends up being inaccurate because of the sheer number of variables involved in actual weather conditions. If even a few factors vary between the lab and the real world, the results can be quite different.

“Your data is only valid for a certain region, temperature, or kind of laboratory setting,” Liu says.

Predicting ice nucleation from the way electrons interact is much more precise, but it’s also very computationally expensive. It requires researchers to model anywhere from 4,000 to 100,000 water molecules, and even on supercomputers, such a simulation could take years to run. Even then, it would only be able to model the interactions for 100 picoseconds, or 10⁻¹⁰ seconds—not long enough to observe the ice nucleation process.

Using deep learning, however, researchers were able to run the calculations in just 10 days. The simulated timespan was also 1,000 times longer—still a fraction of a second, but just enough to see nucleation.

Of course, more accurate models of ice nucleation alone won’t make forecasting perfect, says Liu, since it is only a small though critical component of weather modeling. Other aspects are also important—understanding how water droplets and ice crystals grow, for example, and how they move and interact together under different conditions.

Still, the ability to more accurately model how ice crystals form in the atmosphere would significantly improve weather predictions, especially those involving whether and how much it’s likely to rain or snow. It could also aid climate forecasting by improving the ability to model clouds, which affect the planet’s temperature in complex ways.

Piaggi says future research could model ice nucleation when there are substances like smoke in the air, potentially improving the accuracy of models even more. Because of deep-learning techniques, it’s now possible to use electron interactions to model larger systems for longer periods of time.

“That has opened essentially a new field,” Piaggi says. “It’s already having and will have an even greater role in simulations in chemistry and in our simulations of materials.”

How to craft effective AI policy

A conversation about equity and what it takes to make effective AI policy. This episode was taped before a live audience at MIT Technology Review’s annual AI conference, EmTech Digital.

We Meet:

  • Nicol Turner Lee, director of the Center for Technology Innovation at the Brookings Institution
  • Anthony Green, producer of the In Machines We Trust podcast

Credits:

This episode was created by Jennifer Strong, Anthony Green, Erin Underwood and Emma Cillekens. It was edited by Michael Reilly, directed by Laird Nolan and mixed by Garret Lang. Episode art by Stephanie Arnett. Cover art by Eric Mongeon. Special thanks this week to Amy Lammers and Brian Bryson.

Full Transcript:

[PREROLL]

[TR ID]

Jennifer Strong: The applications of artificial intelligence are so embedded in our everyday lives it’s easy to forget it’s there… But these systems, like ones powering Instagram filters or the price of a car ride home… can rely on pre-existing datasets that fail to paint a complete picture of consumers. 

It means people become outliers in that data – often the same people who’ve historically been marginalized. 

It’s why face recognition technologies are least accurate on women of color, and why ride-share services can actually be more expensive in low-income neighborhoods. So, how do we stop this from happening?

Well would you believe a quote from Harry Potter and his wizarding world… might create a good starting point for this conversation?

I’m Jennifer Strong and this episode, our producer Anthony Green brings you a conversation about equity from MIT Technology Review’s AI conference, EmTech Digital. We’ll hear from Nicol Turner Lee—the director of the Center for Technology Innovation at the Brookings Institution—about what it takes to make effective AI policy.

[EPISODE IN]

Anthony Green: There’s a quote from Harry Potter of all places. 

Nicol Turner Lee: Oh Lord, I, I, I haven’t seen the Harry Potter episodes since my kids were little so I’ll try. 

[Laughter]

Anthony Green: Oh man. Uh, it’s a pretty good one. No, it’s, it’s just kind of stuck with me over the years. I’m honestly not even otherwise a big fan, but, um, the quote goes, there will be a time where we must choose between what is right and what is easy and it feels like that applies pretty squarely to how companies design these systems. Right. So I guess my question is how can policy makers, right, start to push the needle in the right direction when it comes to favorable outcomes for AI in decision making? 

Nicol Turner Lee: Well, that’s a great question. And again, thank you for having me. You may be wondering why I’m sitting here. I’m a sociologist. I’ve had the privilege of being on this stage for a couple of conferences here at MIT. But I got into this… And before I answer your question, because I think the quote that you’re referencing points to much of what my colleagues have talked about, which are the sociotechnical implications of these systems.

Anthony Green: Mm-hmm.

Nicol Turner Lee: So I’ve been doing this for about 30 years. And part of the challenge that we’ve had is that we’ve not seen equitable access to technology. And as we think about these emerging sophisticated systems, to your point, we have to think about the extent to which they have effects on regular everyday people, particularly people who are already marginalized. Already vulnerable in our society. So that quote has a lot of meaning because if we’re not careful, the technology in and of itself will sort of accelerate, I think, some of the progress that we’ve made when it comes to equity and civil rights. 

Anthony Green: Yeah. 

Nicol Turner Lee: Um, I’m gonna date myself for just a moment. I know I look a lot younger. When I was growing up I used to run home and watch the Jetsons, right. There were two cartoons. I watched Fred Flintstone, which if you all remember, he rode around in a car with rocks, and I watched the Jetsons…

Anthony Green: Powered with his feet. 

Nicol Turner Lee: I know, right! You’re too young to know about Fred Flintstone.

Anthony Green: Oh, Boomerang. 

Nicol Turner Lee: But, but if you notice. You know, Fred Flintstone is archaic. Right?

Anthony Green: Right.

Nicol Turner Lee: The, the rocks as wheels doesn’t work. 

Anthony Green: Yeah. 

Nicol Turner Lee: The Jetsons is actually realized. And part of the challenge and the reason that I have interest in this work outside of my, you know, PhD in sociology and my interest in technology is that these systems now are so much more generally purposed that they impact people when they are contextualized in environments.

And that’s where I think we have to have more conversations that point to your question. So roundabout way. But I think it’s really important that we have these conversations now, before the technology accelerates itself.

Anthony Green: Hundred percent. And I mean, you know, all of that said, right, policy making alone isn’t going to be the only solution needed to resolve these issues. So I would love it if you can speak to how accountability, specifically on the part of industry, comes into play as well. 

Nicol Turner Lee: Well, the problem with policy makers is that we’re not necessarily technologists. And so we can see a problem and we actually sort of see that problem in its outcomes. 

Anthony Green: Yeah. 

Nicol Turner Lee: So I don’t think there’s any policy maker, or very few outside of people like Ro Khanna and others, right, who actually know what it’s like to be in, in the tech space. 

Anthony Green: Sure. 

Nicol Turner Lee: That understands how these outcomes occur. They don’t understand what’s underneath the hood. Or as people say, I’m trying to move away from this language. It’s not really a black box. Right. It’s just a box.

Anthony Green: Right.

Nicol Turner Lee: Because there’s some, uh, judgments that come with calling it a black box. But when you think about policy and those outcomes you have to say to yourself, how do policy makers sort of take an organic iterative model and then legislate or regulate it? And that’s where people like me who are in the social sciences, I think, come in and have much more conversation on what they should be looking for. Um, so the accountability there is hard.

Anthony Green: Yeah.

Nicol Turner Lee: Because no one is talking the same language as many of you in this room, right. The technologists are sort of rushing to market. I call it permissionless forgiveness. Uh, as my colleague at the Center for Technology Innovation, Tom Wheeler, has that great phrase, “build it and then break it and then come back and fix it.” Well, guess what happens? That’s permissionless forgiveness. Cuz what happens? We say we’re sorry when people have foreclosed, uh, mortgage rates, are in criminal justice systems where they’re detained longer because these models dictate those predictions.

Anthony Green: Right.

Nicol Turner Lee: So policy makers have not quite, Anthony, caught up to the speed of innovation. And we’ve said that for decades, but it’s actually true. 

Anthony Green: Absolutely. I mean, you’ve referred to this issue in the past as a civil and human rights issue. 

Nicol Turner Lee: It is. It is. 

Anthony Green: Right. So, I mean, can you kind of like expand on that and how that’s kind of shaped your conversations about policy?

Nicol Turner Lee: You know, it’s shaped my conversations from the standpoint of this. I, I, you know, shameless plug, I have a book coming out on the US digital divide so I’ve been very interested. I call it, uh, Digitally Invisible, how the internet is creating the new underclass. And it’s really about the digital divide going past the binary construction of who’s online, who’s not, to really thinking about what are the impacts when you are not connected. 

Anthony Green: Right. 

Nicol Turner Lee: And how do these emerging technologies impact you? So to your point, I call it a civil rights issue because what the pandemic demonstrated is that without internet access, you were actually not able to get the same opportunities as everybody else. You could not register for your vaccine. You could not communicate with your friends and family. Fifty million school-aged kids sent home, 15 to 16 million of them could not learn. And now we’re seeing the effects of that.

Anthony Green: Yeah. 

Nicol Turner Lee: And so when we think about artificial intelligence systems that now have replaced what I call the death of analog. Replace, uh, you know, how we used to do things in person we’re now seeing, in a civil rights age, laws that are being violated. And that.. in ways that I, I don’t necessarily attribute to the malfeasance of technologists. But what they’re doing is they’re foreclosing on opportunities that people have fought hard for.

Anthony Green: Sure. 

Nicol Turner Lee: 2016 election. When we had foreign operatives come in and manipulate the content that was available to voters. That was a form of voter suppression. 

Anthony Green: Right.

Nicol Turner Lee: And there was no place that those folks could go to like the Supreme Court or Congress to say my vote was just taken away based on the deep neural networks that were associated with what they were seeing.

Anthony Green: Yeah. 

Nicol Turner Lee: Or the misinformation around polling. We’re now at a state… when you are in a city like Boston and an Uber driver doesn’t pick you up because he sees your face in the profile. Where do you go for the type of, um, you know, the benefits of, of the civil rights regime that we have that was not based on a digital atmosphere? So part of my work at Brookings has been how do we look at the flexibility and agility of these systems to apply to emerging technologies. And we have no simple answer because these rules were not necessarily developed, you know, in the 21st century.

Anthony Green: Right.

Nicol Turner Lee: They were developed when my grandfather told me how he walked to school with the same pair of shoes, right. Where the bottom was out because he wanted an education. We don’t have that today. And I think it’s worth a conversation as these technologies become more ubiquitous. How are we developing not just inclusive and equitable AI but legally compliant AI? AI that makes sense that people feel that they have some retribution for that malfeasance. So I’ll talk a little bit about some of the work we’re doing on there, but I think, you know, there’s a cadre of individuals like myself, some of them here at MIT, that are really trying to figure out how do we go back and make people accountable to the civil and human liberties of folks and not allow the technology to be the fall person when it comes to, you know, why things wreck havoc or go wrong. 

Anthony Green: Don’t blame the robots. 

Nicol Turner Lee: You know! I tell people robots do not discriminate. I’m sorry. You know, we do. And, and there’s something to be said about that when we start looking at civil rights.

Anthony Green: I’m gonna go to the audience. Anyone got a question? 

Renee, audience member: Thank you so much. Renee, from São Paulo, Brazil.

Nicol Turner Lee: Hey!

Renee, audience member: There is a common theme on these last presentations. It’s about invisibility.

Nicol Turner Lee: Yes!

Renee, audience member: There are so many ways to be invisible. If, if you have the wrong badge you are invisible, like Harry Potter. If you are too old, if you have the wrong kind of skin. And there’s one very interesting thing. When we talk, we talk about data and AI. AI is proposing things about data that are available.

Nicol Turner Lee: Yeah. 

Renee, audience member: But there are data that are completely invisible about people who are invisible. So what kind of solutions are we building if you are basing on data.. based on data about all, always the same people. How do we bring visibility to everybody?

Nicol Turner Lee: Yes!

Renee, audience member: So, thank you so much.

Nicol Turner Lee: No, I love that question. Can I jump right in on this one? 

Anthony Green: Go for it. 

Nicol Turner Lee: You know, uh, my colleague and friend Renee Cummings, who is the AI, uh, scientist in residence at University of Virginia. She introduced me to, a few months ago and we did a podcast where she was featured, this concept of what’s called data trauma.

Anthony Green: Mmmm.

Nicol Turner Lee: And I wanna sort of walk you through this because it blew me away when I began to think about the implications and it goes to Renee’s question. What does it mean, you know, when we talk about AI, we often talk about the problem development, the data that we’re training it on, the way that we’re interpreting the outcomes or explaining them, but we never talk about the quality of the data and the fact that the data in and of itself holds within it, the, the wounds of our society. I don’t care what people say. If you are training AI on criminal justice, um, issues, and you’re trying to make a fair and equitable AI that recognizes who should be detained or who should be released. And we all know that particular algorithm I’m talking about. If it is trained on US data, it is disproportionately going to overrepresent people of color.

So even though my friends, and I tell everybody this, just so you know, like she’s not coming in here, you know, being angry. I tell everybody you need a social scientist as a friend. I don’t care who you are. If you are a scientist, an engineer, a data scientist and you don’t have one social scientist as your friend, you’re not being honest to this problem. Right? Because what happens with that data? It comes with all of that noise. And despite our ability as scientists to sort of tease out that noise or diffuse the noise, you still have the basis and the foundation for the inequality. And so one of the things I’ve tried to tell people, it’s probably okay for us to recognize the trauma of the data that we’re using. It’s okay for us to realize that our models will be normative in the extent to which there will be bias. Technical bias, societal bias, outcome bias and prediction bias, but we should disclose what those things are. 

Anthony Green: Yeah. 

Nicol Turner Lee: And that’s where my work in particular has become really interesting to me as a person who is looking at this as, you know, the use of proxies and the use of data. For me, it becomes what part of the model is much more injurious to respondents and to outcomes. And what part should we disclose that we just don’t have the right data to predict accurately without some type of, you know, risk…

Anthony Green: Sure. 

Nicol Turner Lee: …to that population. 

Anthony Green: Yeah. 

Nicol Turner Lee: So to your question, I think if we acknowledge that, you know, I think then we can get to a point where we can have these honest conversations on how we bring interdisciplinary context to certain situations.

Anthony Green: We’ve got another question.

Kyle, audience member: Hi Nicol.

Nicol Turner Lee: Hey.

Kyle, audience member: I’m grateful for your perspective. Um, my name is Kyle. I run… I’m a data scientist by training and I run a team of AI and ML designers and developers. And so, you know, it scares me how fast the industry’s evolving. You mentioned GPT-3. We’re already talking about GPT-4 is in the works and the exponential leap in capabilities that’s gonna present. Something that you mentioned that really struck me is that legislators don’t understand what we’re doing. And I don’t believe that us as data scientists should be the ones making decisions about how to tie our hands behind our backs.

Nicol Turner Lee: Yeah. 

Kyle, audience member: And how to protect our work from having unintended consequences.

Nicol Turner Lee: Yes. 

Kyle, audience member: So how do we engage and how do we help legislators understand the real risks and not the hype that is sometimes heard or perceived in the media? 

Nicol Turner Lee: Yeah, no, I love that question. I’m actually gonna flip it. And I’m gonna talk about it in two ways in which I actually talk about it. So I do think that legislators who work in this space, particularly in those sensitive use cases.

So I tell people, I give this example all the time. I love shopping for boots and I’m okay with the algorithm that tells me as a consumer that I love boots, but as Latanya Sweeney’s work has indicated, if you associate other things with me. Uh, what other, uh, attributes does this particular person have? When does she buy boots? How many boots does she have? Does she check her credit when she’s buying boots? What kind of computer is she using when she’s buying her boots? If you begin to make that cumulative picture around me, then we run into what Dr. Sweeney has talked about—these associations that create that type of risk.

So to your first question, I think you’re right. That policy makers should actually define the guardrails, but I don’t think they need to do it for everything. I think we need to pick those areas that are most sensitive. The EU has called them high risk. And maybe we might take from that, some models that help us think about what’s high risk and where should we spend more time and potentially policy makers, where should we spend time together?

I’m a huge fan of regulatory sandboxes when it comes to co-design and co-evolution of feedback. Uh, I have an article coming out in an Oxford University press book on an incentive-based rating system that I could talk about in just a moment. But I also think on the flip side that all of you have to take account for your reputational risk.

As we move into a much more digitally advanced society, it is incumbent upon developers to do their due diligence too. You can’t afford as a company to go out and put an algorithm that you think, or an autonomous system that you think is the best idea, and then land up on the first page of the newspaper. Because what that does is it degrades the trustworthiness by your consumers of your product.

And so what I tell, you know, both sides is that I think it’s worth a conversation where we have certain guardrails when it comes to facial recognition technology, because we don’t have the technical accuracy when it applies to all populations. When it comes to disparate impact on financial products and services. There are great models that I’ve found in my work, in the banking industry, where they actually have triggers because they have regulatory bodies that help them understand what proxies actually deliver disparate impact. There are areas that we just saw this right in the housing and appraisal market, where AI is being used to sort of, um, replace a subjective decision making, but contributing more to the type of discrimination and predatory appraisals that we see. There are certain cases that we actually need policy makers to impose guardrails, but more so be proactive. I tell policymakers all the time, you can’t blame data scientists. If the data is horrible.

Anthony Green: Right.

Nicol Turner Lee: Put more money in R and D. Help us create better data sets that are overrepresented in certain areas or underrepresented in terms of minority populations. The key thing is, it has to work together. I don’t think that we’ll have a good winning solution if policy makers actually, you know, lead this or data scientists lead it by itself in certain areas. I think you really need people working together and collaborating on what those principles are. We create these models. Computers don’t. We know what we’re doing with these models when we’re creating algorithms or autonomous systems or ad targeting. We know! We in this room, we cannot sit back and say, we don’t understand why we use these technologies. We know because they actually have a precedent for how they’ve been expanded in our society, but we need some accountability. And that’s really what I’m trying to get at. Who’s making us accountable for these systems that we’re creating?

It’s so interesting, Anthony, these last few, uh, weeks, as many of us have watched the, uh, conflict in Ukraine. My daughter, because I have a 15 year old, has come to me with a variety of TikToks and other things that she’s seen to sort of say, “Hey mom, did you know that this is happening?” And I’ve had to sort of pull myself back cause I’ve gotten really involved in the conversation, not knowing that in some ways, once I go down that path with her. I’m going deeper and deeper and deeper into that well.

Anthony Green: Yeah.

Nicol Turner Lee: And I think for us as scientists, it kind of goes back to this I Have a Dream speech. We have to determine which side of history we wanna be on with this technology, folks. And how far down the rabbit hole do we wanna go to contribute? I think the greatness of AI is our ability to have human cognition wrapped up in these repetitive processes that go way beyond our wildest imagination of the Jetsons.

And that allows us to do things that none of us have been able to do in our lifetime. Where do we want to sit on the right side of history? And how do we want to handle these technologies so that we create better scientists? 

Anthony Green: Sure. 

Nicol Turner Lee: Not ones that are worse. And I think that’s a valid question to ask of this group. And it’s a valid question to ask of yourself. 

Anthony Green: I don’t know if we can end on anything better and we’re out of time! Nicol, we can go all day but..

Nicol Turner Lee: I know. I always feel like a Baptist preacher, you know, so if I have energy about it…

Anthony Green: Choir, can you sing it? 

Nicol Turner Lee: I know, right. I can’t sing it, but you can do that I Have A Dream speech, Anthony. 

[Laughter] 

Anthony Green: Oh man. You’re putting me on the stand and I’m already on stage. 

Nicol Turner Lee: Yeah, right haha. 

Anthony Green: Nicol, thank you so much. 

Nicol Turner Lee: Thank you so much as well. Appreciate it. 

Anthony Green: Absolutely. 

Nicol Turner Lee: Thank you everybody here.

[MIDROLL AD]

Jennifer Strong: This episode was produced by Anthony Green, Erin Underwood, and Emma Cillekens. It’s edited by Michael Reilly, directed by Laird Nolan and mixed by Garret Lang. It was recorded in front of a live audience at the MIT Media Lab in Cambridge Massachusetts, with special thanks to Amy Lammers and Brian Bryson.

Jennifer Strong: Thanks for listening. I’m Jennifer Strong. 

A bioengineered cornea can restore sight to blind people

A bioengineered cornea has restored vision to people with impaired eyesight, including those who were blind before they received the implant.

These corneas, described in Nature Biotechnology today, could help restore sight to people in countries where human cornea transplants are in short supply, and for a lower price. Unlike human corneas, which must be transplanted within two weeks, the bioengineered implants can be stored for up to two years, which could help with shipping them to those who need them the most.

The cornea implant is made from collagen protein extracted from pig skin, which has a similar structure to human skin. Purified collagen molecules were processed to ensure that no animal tissues or biological components remained. The team, from Linköping University in Sweden, then stabilized the loose molecules into a hydrogel scaffold designed to mimic the human cornea, which was robust enough to be implanted into an eye. 

Surgeons in Iran and India conducted a pilot trial of 20 people who were either blind or close to losing their sight from advanced keratoconus. This disease thins the cornea, the outermost transparent layer of the eye, and prevents the eye from focusing properly. The implant restored the cornea’s thickness and curvature. All 14 of the participants who had been blind before the operation had their vision restored, with three of them achieving perfect 20/20 vision. 

While human cornea transplants in patients with keratoconus are traditionally sewn in using sutures, the team experimented with a new surgical method that’s simpler and potentially safer. They used a laser to make an incision in the middle of the existing cornea before inserting the implant, which helped the wound heal more quickly and created little to no inflammation afterwards. Consequently, the patients only needed to use immunosuppressant eye drops for eight weeks, while recipients of traditional transplants usually need to take immunosuppressants for at least a year.

One unexpected bonus was that the implant changed the shape of the cornea enough for its recipients to wear contact lenses for the best possible vision, even though they had been previously unable to tolerate them.

The cornea helps focus light rays on the retina at the back of the eye and protects the eye from dirt and germs. When damaged by infection or injury, it can prevent light from reaching the retina, making it difficult to see.

Corneal blindness is a big problem: around 12.7 million people are estimated to be affected by the condition, and cases are rising at a rate of around a million each year. Iran, India, China, and various countries in Africa have particularly high levels of corneal blindness, and specifically keratoconus.

Because pig skin is a by-product of the food industry, using this bioengineered implant should cost a fraction as much as transplanting a human donor cornea, said Neil Lagali, a professor in the Department of Biomedical and Clinical Sciences at Linköping University and one of the researchers behind the study.

“It will be affordable, even to people in low-income countries,” he said. “There’s a much bigger cost saving compared to the way traditional corneal transplantation is being done today.”

The team is hoping to run a larger clinical trial of at least 100 patients in Europe and the US. In the meantime, they plan to kick-start the regulatory process required for the US Food and Drug Administration to eventually approve the device for the market.

While the implant has proved effective at treating keratoconus, the researchers believe it could also treat other eye conditions, including corneal dystrophies and scarring from infections or trauma. Additional research is required to confirm this. 

Although the cornea donor shortage is not as severe in Western countries as it is in developing countries, the implant could also help reduce waiting lists in richer nations, said Mehrdad Rafat, a senior lecturer at Linköping University who designed the implants. 

“We think it could be sold at a higher price in developed countries to balance out the cost of production so we can continue work on other eye conditions,” he said. “We are very optimistic about that.”

China has censored a top health information platform

China has censored DXY, the country’s leading health information platform. On August 9, DXY fell silent across social media, where it boasts over 80 million followers. Five of its Weibo accounts were suddenly suspended with a vague explanation that they had violated “relevant laws and regulations,” while its accounts on WeChat and Douyin (the domestic version of TikTok) stopped publishing. 

Neither these social media platforms nor DXY have released any public statement on what caused the suspension, but Nikkei Asia reported that the order came from regulators and won’t end without official approval.

The suspension has been met with a gleeful reaction among nationalist bloggers, who accuse DXY of receiving foreign funding, bashing traditional Chinese medicine, and criticizing China’s health-care system.

DXY is one of the front-runners in China’s digital health startup scene. It hosts the largest online community Chinese doctors use to discuss professional topics and socialize. It also provides a medical news service for a general audience, and it is widely seen as the most influential popular science publication in health care. 

“I think no one, as long as they are somewhat related to the medical profession, doesn’t follow these accounts [of DXY],” says Zhao Yingxi, a global health researcher and PhD candidate at Oxford University, who says he followed DXY’s accounts on WeChat too. 

But in the increasingly polarized social media environment in China, health care is becoming a target for controversy. The swift conclusion that DXY’s demise was triggered by its foreign ties and critical work illustrates how politicized health topics have become. 

Since its launch in 2000, DXY has raised five rounds of funding from prominent companies like Tencent and venture capital firms. But even that commercial success has caused it trouble this week. One of its major investors, Trustbridge Partners, raises funds from sources like Columbia University’s endowments and Singapore’s state holding company Temasek. After DXY’s accounts were suspended, bloggers used that fact to try to back up their claim that DXY has been under foreign influence all along. 

Part of the reason the suspension is so shocking is that DXY is widely seen as one of the most trusted online sources for health education in China. During the early days of the covid-19 pandemic, it compiled case numbers and published a case map that was updated every day, becoming the go-to source for Chinese people seeking to follow covid trends in the country. DXY also made its name by taking down several high-profile fraudulent health products in China.

It also hasn’t shied away from sensitive issues. For example, on the International Day Against Homophobia, Transphobia, and Biphobia in 2019, it published the accounts of several victims of conversion therapy and argued that the practice is not backed by medical consensus. 

“The article put survivors’ voices front and center and didn’t tiptoe around the disturbing reality that conversion therapy is still prevalent and even pushed by highly ranked public hospitals and academics,” says Darius Longarino, a senior fellow at Yale Law School’s Paul Tsai China Center. 

The sudden suspension of DXY’s social media accounts caught its followers by surprise. “I don’t think they have done anything improper,” says Zhao. 

But opponents of DXY were quick to find a political explanation. On social media, some defended the suspension on the basis that DXY has questioned the efficacy of traditional Chinese medicine (TCM).

In April 2022, DXY published an article about Lianhua Qingwen, a modern TCM product that has been distributed by the Chinese government as a treatment for covid-19. In the now-deleted article, the DXY authors cautioned that the medicine has never been properly tested in a clinical trial, and pointed out that some of the ingredients are banned in other countries.

The article, widely read and circulated, caused the value of the pharmaceutical company behind Lianhua Qingwen, Shijiazhuang Yiling Pharmaceutical, to fall by $1.05 billion on the stock market. To most health-care professionals, the article was reasonable and uncontroversial. But it made DXY a target for TCM supporters. Since then, DXY has been accused of colluding with Western pharmaceutical companies like Pfizer, which manufactures the covid-19 treatment Paxlovid, to defame TCM. TCM supporters celebrated the suspension of DXY’s accounts as proof of the platform’s wrongdoings. 

Just as in the US, certain public health topics have become heavily politicized in China, including covid-19. Choosing a side in scientific debates in China is oftentimes equated to choosing a side in the ideological divide between China and the West.

Although DXY is the most prominent example, it’s not the first popular science publication to fall victim to social media campaigns in China. Several popular digital science publications and video channels, like Elephant Magazine and Paperclip, have been trolled and targeted by nationalist influencers for receiving funding from foreign NGOs and publishing content that paints China in a bad light. These publications either dissolved or disappeared from social media after they were censored.

DXY’s supporters fear it could go the same way. 

“This is the best domestic health education platform. It has contributed greatly to the popularization of evidence-based medicine,” Wang Zhian, a Chinese journalist whose social media accounts were censored in 2019 before he emigrated to Japan, wrote on Twitter. “I hope it can survive.” 

The Download: tech’s gender gap, and how Gen Z handles misinformation

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

Why can’t tech fix its gender problem?

Despite the tech sector’s great wealth and loudly self-proclaimed corporate commitments to the rights of women, LGBTQ+ people, and racial minorities, the industry remains mostly a straight, white man’s world.

Much of the burden for changing the system has been placed on women themselves: they’re exhorted to learn to code, major in STEM, and become more self-assertive. But self-confidence and male-style swagger have not been enough to overcome structural hurdles, especially for tech workers who are also parents. Even the pandemic’s shift towards remote working hasn’t made workplaces more hospitable to women.

It wasn’t always this way. Software programming once was an almost entirely female profession. As recently as 1980, women held 70% of the programming jobs in Silicon Valley, but the ratio has since flipped entirely. While many things contributed to the shift, from the educational pipeline to the tiresomely persistent fiction of tech as a gender-blind “meritocracy,” none explain it entirely. What really lies at the core of tech’s gender problem is money. Read the full story.

—Margaret O’Mara

Google examines how different generations handle misinformation

The news: Younger people are more likely than older generations to think they may have unintentionally shared false or misleading information online—often driven by the pressure to share emotional content quickly. However, they are also more adept at using advanced fact-checking techniques, a new study from Poynter, YouGov, and Google has found.

What they found: One-third of Gen Z respondents said they practice lateral reading (making multiple searches and cross-referencing their findings) always or most of the time when verifying information—more than double the percentage of boomers.

But, but: The study relies on participants reporting their own beliefs and habits, which is a notoriously unreliable method. And the optimistic figures about Gen Z’s actual habits contrast pretty starkly with other findings on how people verify information online. Read the full story.

—Abby Ohlheiser

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 Amazon wants to start offering teletherapy 
The e-commerce giant is rapidly expanding into healthcare. (Insider $)
+ And it’s expanding its palm print-reading payment system into dozens of Whole Foods stores. (Ars Technica)

2 The US has rejected Starlink’s broadband supply bid 🛰️
The FCC said it had failed to demonstrate that it could deliver on its promise to supply rural America with broadband. (TechCrunch)
+ Who is Starlink really for? (MIT Technology Review)

3 Big Tech wants to build data centers on US battlefields
But Civil War preservationists are fighting back. (New Scientist $)

4 China’s economic crisis is birthing a new wave of tycoons
But they’re making their fortunes in sportswear and skincare, not tech. (Economist $)

5 Silicon Valley’s boy genius founders are joining the Great Resignation
Their money-losing businesses want experienced leadership during a tough time for the industry. (NYT $)
+ Why Steve Jobs was so fond of his turtleneck. (NYT $)

6 Air conditioning is terrible for the planet
Better building ventilation and greener units are just a few alternative solutions. (Vox)
+ The legacy of Europe’s heat waves will be more air conditioning. (MIT Technology Review)
+ Big Tech’s engineers are leaving legacy businesses for climate-focused startups. (Protocol)

7 Social media really wants shopping live streams to take off
Live ecommerce is already huge in China, but uptake has been slower elsewhere. (FT $)
+ China wants to control how its famous livestreamers act, speak, and even dress. (MIT Technology Review)

8 The rise and rise of the ebike
Amid rising gas prices, electric bikes are a cheaper alternative to cars. (WSJ $)
+ Lithium, which is essential for electric car batteries, is in short supply right now. (WSJ $)

9 Millennials are bonding with their kids over Pokémon
After 26 years, the franchise has mass-generational appeal. (WP $)
+ Fewer people are gaming now than at the height of the pandemic. (Reuters)

10 Jobhunters are paying $1,000 for the perfect LinkedIn headshot
In an image-obsessed world, they’re hoping it’ll give them the edge. (WSJ $)

Quote of the day

“Cyber criminals have been eating our lunch.”

—Chris Krebs, former director of the US Cybersecurity and Infrastructure Security Agency, thinks the government has been blinded to the threat of everyday ransomware attacks due to its focus on tracking sophisticated overseas attackers, reports PC Mag.

The big story

This is the reason Demis Hassabis started DeepMind

Demis Hassabis

February 2022

In March 2016 Demis Hassabis, CEO and cofounder of DeepMind, was in Seoul, South Korea, watching his company’s AI make history. AlphaGo, a computer program trained to master the ancient board game Go, played a five-game match against Korean pro Lee Sedol and beat him 4-1, in a victory that changed the world’s perception of what AI can do.

But while the DeepMind team was celebrating, Hassabis was already thinking about an even bigger challenge. He realized that his company’s technology was ready to take on one of the most important and complicated puzzles in biology, one that researchers had been trying to solve for 50 years: predicting the structure of proteins. Read the full story.

—Will Douglas Heaven

We can still have nice things

A place for comfort, fun and distraction in these weird times. (Got any ideas? Drop me a line or tweet ’em at me.)

+ 8glitchorbit’s digital art is weirdly soothing.
+ Prey, the new Predator prequel, sounds like it might just absolve the franchise’s past few horrors.
+ All hail the rise and rise of the emo leading man.
+ This is interesting: investigators are using DNA to fight back against illegal loggers.
+ Turtles are returning to the Mississippi mainland for the first time in four years.

Why can’t tech fix its gender problem?

A full decade has passed since Ellen Pao filed a gender discrimination suit against her employer, the legendary Silicon Valley venture capital firm Kleiner Perkins. Two years later came the toxicity and misogyny of Gamergate, followed by #MeToo scandals and further revelations of powerful men in the tech business behaving very badly. All catalyzed an overdue public reckoning over the industry’s endemic sexism, racism, and lack of representation at the top. And to what effect?

Many slickly designed diversity reports and ten thousand Grace Hopper coffee mugs later, the most striking change has been in the size and wealth of the technology sector itself. Even as the market overall turned bearish in 2022, the combined market capitalization of the five largest tech companies approached $8 trillion. Despite the sector’s great wealth and loudly self-proclaimed corporate commitments to the rights of women, LGBTQ+ people, and racial minorities, tech remains mostly a straight, white man’s world. The proportion of women in technical roles at large companies is higher than it used to be but remains a painfully low 25%. Coding schools for people of marginalized genders are expanding, and the number of female majors in some top computer science programs has increased. Yet overall, representation remains low and attrition high, especially for women of color. 

Much of the burden for changing the system has been placed on women themselves: they’re exhorted to learn to code, major in STEM, and become more self-assertive. In her 2013 bestseller Lean In, Sheryl Sandberg of Meta urged women to push harder and demand more—by acting the way men did. 

Self-confidence and male-style swagger have not been enough to overcome structural hurdles, especially for tech workers who are also parents. Even the mass adoption of remote work in the covid-19 era failed to make tech workplaces more hospitable. A recent survey by Deloitte found that a majority of women in the industry felt more pessimistic about their career prospects than they did before the pandemic. Nearly six in 10 expected to change jobs as a result of inadequate work-life balance. More than 20% considered leaving tech altogether.

At Amazon, Apple, Google, and Microsoft, the CEO baton has passed from one man to another. Sandberg announced in June that she was stepping down, Elizabeth Holmes awaits sentencing for the fraud she committed as CEO of Theranos, and the #girlboss moment has given way to swaggering performances of tech-mogul masculinity such as Jeff Bezos, in spacesuit and cowboy hat, soaring skyward in a phallic rocket.

After the US Supreme Court decision overturning Roe v. Wade, large tech companies were among the first to announce that they would cover the costs for employees who needed to travel to another state to end a pregnancy. But they refrained from taking positions on the ruling itself. Meta discouraged employees from talking about it on company message boards, even limiting the visibility of social media posts by Sandberg lamenting the decision. Support for abortion rights and the women advocating for it only went so far.

Much of tech’s gender problem is a corporate America problem. Women, especially women of color, remain grossly underrepresented in top executive ranks across sectors. But tech is an industry that promised to think different, change the world, and make money without being evil. It also has a long history of employing many technical women.

Software programming once was an almost entirely female profession. As recently as 1980, women held 70% of the programming jobs in Silicon Valley. That ratio has completely flipped. Female technicians once outnumbered male workers on the Valley’s hardware assembly lines by more than two to one. Those jobs are now nearly all overseas. In 1986, 36% of those receiving bachelor’s degrees in computer science were women. The proportion of women never reached that level again.

Many things contributed to the shift: the educational pipeline, the tech-geek stereotypes, the industry’s long-standing and enthusiastic reliance on hiring by employee referral, the tiresomely persistent fiction of tech as a gender-blind “meritocracy.” None explain it entirely. 

What really lies at the core of tech’s gender problem is money.

The technology industry has generated significant, and sometimes enormous, personal fortunes. Most of this money has gone to men. Tech executives have become the richest people in human history. Only two women currently appear on the list of tech’s 20 richest people: one is the widow of a tech billionaire, the other the ex-wife of one.

Venture capital investment has been and remains the tech ecosystem’s least diverse domain. White and Asian men make up 78% of those responsible for investment decisions and manage 93% of venture dollars overall. While there are now more female-led investment funds than there were a few years ago, the majority of venture capital firms still have zero women as general partners or fund managers.

Of the few women in these roles, nearly all are white. The US venture capital industry invested a record-breaking $329 billion across more than 17,000 deals in 2021. Only 2% of this bonanza went to startups founded solely by women—the lowest level since 2016. Less than 0.004% of the venture capital invested in the first half of 2021 went to startups with Black female founders. 

The lack of investor and founder diversity has far-reaching consequences. It does not only determine who gets rich. It also shapes the kinds of problems technology companies set out to solve, the products they develop, and the markets they serve. 

The patterns seen today in venture capital firms have been more than seven decades in the making. That is one reason they are so difficult to unwind. But there’s another, thornier problem. The things that have worked against venture diversity—and tech diversity in general—have also been secrets of the American technology industry’s success since the very start.

Beginnings: “A future without boundaries”

There was never really a golden age for women in tech. If a job was female-dominated, it often paid less and was valued less, and its occupants were considered easily replaceable. When women did the same jobs as men, they were regarded as a curiosity, a blip in a male-dominated corporate world. 

In 1935, IBM chief executive Thomas Watson Sr. made a great show of hiring 35 newly minted college graduates as his company’s first class of “Systems Service Women,” tasked with giving technical support to new customers. Men held these jobs too, but only the women spent their first week of employment being feted like debutantes, welcomed with bouquets of flowers and a formal dinner dance hosted by Watson. 

The women who programmed wartime computer projects in the 1940s were first called “operators,” their jobs seemingly little different from those held by the thousands of fast-thinking women who sat before the nation’s telephone switchboards. With the arrival in the early 1950s of program compilers—a technology and term invented by a woman—the workers became “coders,” a word reflecting a persistent misunderstanding of programming as something mechanistic, practically stenographic. 

A female operator at the control desk of the world’s fastest calculator, the IBM Selective Sequence Electronic Calculator, in IBM’s New York offices in 1948.
KEYSTONE/GETTY IMAGES

Around the same time, IBM executives placed mainframes in the lobby of the company’s New York City headquarters and hired female programmers to work in view of passersby. That way, one supervisor explained to a female recruit, the machines “will look simple and men will buy them.”

Meanwhile, corporations aggressively recruited technical men, promising good paychecks and likely promotion. In the late 1950s and early 1960s, the gender-segregated classified employment pages brimmed with ads enticing male engineers through promises like “Your own enthusiasm and professional growth are the only limits to a future without boundaries.” 

Early computing history abounds with these stories, reflecting the endemic sexism of American corporate culture before equal-opportunity laws and other victories of modern feminism. Notably, the women who rose to senior technical positions during this period often worked for military agencies or at NASA, where clearly codified standards for promotion better protected women from managerial whims.

Growth: “The Olympics of capitalism”

While technical women stayed mainly within large organizations, male engineers gradually began to leave academia and corporate life to start their own companies. This entrepreneurial model reached its apex in Northern California’s Santa Clara Valley. 

Stanford University–trained engineers had been starting companies in local garages and disused farm buildings since the 1930s, but it wasn’t until the 1950s that the Valley became a tech powerhouse. Cold War spending transformed Stanford, filled the Valley with defense contractors, and fueled the growth of a new cluster of silicon-semiconductor startups. These firms gave Silicon Valley its name, built many of its first great fortunes, and left an indelible imprint on its corporate culture.

Life in early Silicon Valley chip firms was like Mad Men with fewer suits, more all-nighters, and the occasional screaming match over circuit-board design. Secretaries were usually the only women in sight. Employees were expected to show up before 8 a.m., work as late as they could bear, and then go out for beers. The countercultural 1960s never really happened in the semiconductor industry; this was engineering, not an encounter group. Management rewarded rational minds and thick skins. “I hired you,” National Semiconductor executive Don Valentine once told a new recruit, “because you were the only one I couldn’t intimidate.” 

Making all this intensity possible were stay-at-home wives—the most hidden of tech’s hidden figures, whose care of children and home allowed for their husbands’ total work immersion. The rare female executive had to keep pace, acting as if similarly unbothered by personal demands, sneaking phone calls to her children on the side. 

Women were the mainstay of Fairchild Semiconductor’s busy production line in 1964.
THE MERCURY NEWS VIA GETTY IMAGES

By the 1970s, the success of these firms had minted hundreds of millionaires, most of them men in their early 30s. High-tech entrepreneurship, one Valley investor declared, was “the Olympics of capitalism.”

Not competing in this Olympics, but still contributing to the industry’s success, were the thousands of women who worked in the Valley’s microchip fabrication plants and other manufacturing facilities from the 1960s to the early 1980s. Some were working-class Asian- and Mexican-Americans whose mothers and grandmothers had worked in the orchards and fruit canneries of the prewar Valley. Others were recent migrants from the East and Midwest, white and often college educated, needing income and interested in technical work.

With few other technical jobs available to them in the Valley, women would work for less. The preponderance of women on the lines helped keep the region’s factory wages among the lowest in the country. Women continue to dominate high-tech assembly lines, though now most of the factories are located thousands of miles away. In 1970, one early American-owned Mexican production line employed 600 workers, nearly 90% of whom were female. Half a century later the pattern continued: in 2019, women made up 90% of the workforce in one enormous iPhone assembly plant in India. Female production workers make up 80% of the entire tech workforce of Vietnam. 

Venture: “The Boys Club”

Chipmaking’s fiercely competitive and unusually demanding managerial culture proved to be highly influential, filtering down through the millionaires of the first semiconductor generation as they deployed their wealth and managerial experience in other companies. But venture capital was where semiconductor culture cast its longest shadow. 

The Valley’s original venture capitalists were a tight-knit bunch, mostly young men managing older, much richer men’s money. At first there were so few of them that they’d book a table at a San Francisco restaurant, summoning founders to pitch everyone at once. So many opportunities were flowing it didn’t much matter if a deal went to someone else. Charter members like Silicon Valley venture capitalist Reid Dennis called it “The Group.” Other observers, like journalist John W. Wilson, called it “The Boys Club.”

The men who left the Valley’s first silicon chipmaker, Shockley Semiconductor, to start Fairchild Semiconductor in 1957 were called “the Traitorous Eight.” From left: Gordon Moore, C. Sheldon Roberts, Eugene Kleiner, Robert Noyce, Victor Grinich, Julius Blank, Jean Hoerni, and Jay Last.
WAYNE MILLER/MAGNUM PHOTOS

The venture business was expanding by the early 1970s, even though down markets made it a terrible time to raise money. But the firms founded and led by semiconductor veterans during this period became industry-defining ones. Gene Kleiner left Fairchild Semiconductor to cofound Kleiner Perkins, whose long list of hits included Genentech, Sun Microsystems, AOL, Google, and Amazon. Master intimidator Don Valentine founded Sequoia Capital, making early-stage investments in Atari and Apple, and later in Cisco, Google, Instagram, Airbnb, and many others.

Generations: “Pattern recognition”

Silicon Valley venture capitalists left their mark not only by choosing whom to invest in, but by advising and shaping the business sensibility of those they funded. They were more than bankers. They were mentors, professors, and father figures to young, inexperienced men who often knew a lot about technology and nothing about how to start and grow a business. 

“This model of one generation succeeding and then turning around to offer the next generation of entrepreneurs financial support and managerial expertise,” Silicon Valley historian Leslie Berlin writes, “is one of the most important and under-recognized secrets to Silicon Valley’s ongoing success.” Tech leaders agree with Berlin’s assessment. Apple cofounder Steve Jobs—who learned most of what he knew about business from the men of the semiconductor industry—likened it to passing a baton in a relay race.

Georges Doriot, the Harvard Business School professor known as “the Father of Venture Capital.”
Don Valentine, founder of Sequoia Capital.

Venture capitalists often believed that the person was as important as the product, if not more so. “An average idea in the hands of an able man,” declared Georges Doriot, the Harvard Business School professor known as “the Father of Venture Capital,” “is worth much more than an outstanding idea in the possession of a person with only average ability.”

One surefire way to find “able men” was to fund or recruit people you had successfully worked with before. This is another critical dimension of the Silicon Valley model: tightly knit networks that often work together in multiple startups. The most famous of these groups acquired nicknames. The men who left the Valley’s first silicon chipmaker, Shockley Semiconductor, to start Fairchild Semiconductor were called “the Traitorous Eight.” Four decades later, a group of men, many of whom had met writing for Stanford’s conservative student newspaper (including Peter Thiel, who cofounded it), became a core part of the founding team of PayPal. With the company’s acquisition, they became “the PayPal Mafia,” using their wealth to found new venture-backed companies and become investors in many others. 

Venture capital firms became the connective tissue joining clusters of fortunate coworkers into an even larger network. One of the first firms to invest in PayPal was Sequoia Capital.

When it came to people an investor didn’t already know, reliance on personal attributes and a healthy dose of gut feeling led venture partners to bet on founders who seemed to share a lot of the same qualities as those who had succeeded before—in short, people like those already in their networks. “Pattern recognition” was how Kleiner Perkins partner John Doerr once put it. The most successful founders “all seem to be white, male nerds who’ve dropped out of Harvard or Stanford, and they absolutely have no social life”; when they showed up in his office, he said, he knew it was time to invest.

Eugene Kleiner, cofounder of Kleiner Perkins.
John Doerr, partner at Kleiner Perkins.

The remark was an unintended gaffe—don’t say the quiet part out loud!—but it was true. Doerr had risen up the ranks at Intel and took what he had learned in chipmaking to build one of the most successful venture careers in tech history. He funded and mentored Marc Andreessen of Netscape, Sergey Brin and Larry Page of Google, and Jeff Bezos of Amazon—all men. 

Doerr and venture capitalists before and after him were eager to absorb new ideas, but their experiences had persuaded them that tech was a meritocracy and allowed them to ignore the exclusion perpetuated by Silicon Valley’s tight-knit networks.

Money: “The Golden Geeks”

Hardware companies dominated both Silicon Valley and Boston through the 1970s. Software was rarely a stand-alone product but was bundled into a computer purchase or given away free. This helps explain why many women continued to hold programming jobs even as the field professionalized, rose in prestige, and came to be regarded by many corporations as an environment best suited for “antisocial, mathematically inclined males.” 

When desktop computers first arrived on the market, some employers embraced programming as a job perfect for working mothers, who could plug in a modem and code from home between school pickups and household chores. That moment was short-lived, for the personal computer business also created the immensely profitable desktop software industry. Programming was no longer just for introverts and elementary school moms. It minted billionaires. 

Time magazine covers featuring Steve Jobs (1982) and Bill Gates (1984).

The colossus at the industry’s center was Microsoft, led by the most famous software geek of all, Bill Gates. By the late 1990s, the company’s products ran on over 90% of the personal computers on the planet. Gates was the world’s richest man and Valley venture capitalists were early-stage investors in his company. On Microsoft’s campus outside Seattle, armies of software engineers worked seven days a week. The workforce was so overwhelmingly male that one observer called it “the frat house from another planet.” Microsoft’s stock awards turned roughly 10,000 employees, mostly men and many under 30, into millionaires. Money ruled the 1980s, the 1990s, and beyond. “Striking It Rich,” read a 1982 Time headline hovering over a depiction of Apple’s Jobs on the magazine’s cover. Gates followed in 1984, twirling a floppy disk. In 1996, Time handed the crown to Netscape cofounder Andreessen. “The Golden Geeks,” the magazine crowed, picturing the 24-year-old multimillionaire hamming it up while sitting barefoot on a gilded throne. 

Power: “I’m CEO … bitch”

After 2000, Silicon Valley was the undisputed high-tech capital, no longer just a place in California but shorthand for the industry itself. Founders of this new generation had a new set of mentors to learn from and admire. Jobs’s triumphant 1997 return to Apple after being forced out over a decade earlier had made him a business legend. His untimely death in 2011 further enshrined his legacy as the founder to emulate. 

Andreessen was now a successful venture capitalist dispensing managerial wisdom over coffee and pancakes, just as an older generation had done for him decades before. “He became a sounding board about management and how to build a strong technology company,” recalled Mark Zuckerberg of the regular meetups he had with Andreessen in the early days of Facebook. “He has strong views on that, and they helped shape mine.” 

The new generation of founders tended to be younger and brasher. Men who had spent their boyhoods staring into computer screens now had power, money, and swagger. A few months into Facebook’s existence, Zuckerberg realized he needed business cards. He ordered up two versions. One simply said “CEO.” The other: “I’m CEO … bitch!” 

The workplace cultures of today’s large technology companies are as all-consuming as those of any early chipmaker. And the perks that firms showered on their white-collar employees pre-pandemic said a lot about the kinds of workers tech companies most valued. In 2017, Apple moved into an extraordinary new $5 billion headquarters with a two-story yoga room and seven cafes. Although it was designed to hold 12,000 employees, it did not have a child-care center.

Tech’s reckoning?

Today, the baton is passing to crypto enthusiasts and Web3 evangelists. While the cast of characters is slightly more diverse than it once was, the potential superstars of the next generation—Coinbase’s Brian Armstrong and FTX’s Sam Bankman-Fried, to name two—remain mostly white and male.

Tech’s gender reckoning has been among a number of things fueling a new wave of employee activism. For the first time, Silicon Valley’s white-collar employees are speaking out publicly against their employers and, in some instances, successfully pressuring them to change corporate practices.

Computer scientist Timnit Gebru, who has become a powerful critic of Silicon Valley business and research practices since her firing from Google.
AP PHOTO/JEFF CHIU

One striking thing about today’s activists, organizers, and whistleblowers is that nearly all of them are female, gender-nonconforming, or queer. Several are nonwhite. Outside and less beholden to tech’s charmed circles, they have been able to see tech’s problems more clearly. Women were six of the seven organizers of the 20,000-strong Google walkout in 2018, which protested the $90 million severance package awarded to top executive Andy Rubin after credible claims of sexual harassment. Computer scientist Timnit Gebru was recruited to Google because of her groundbreaking work on algorithmic bias and then was fired, reportedly because of the company’s discomfort with her findings. She has since become a powerful critic of Silicon Valley business and research practices. Data scientist Frances Haugen worked at Google, Yelp, and Pinterest before she came to Facebook, where alarm at the company’s business practices prompted her to copy thousands of pages of internal documents and leak them to reporters. (Haugen admitted that she was able to blow the whistle at Facebook because her tech career had made her wealthy enough to leave her job.)

Within companies, employee activism grows by the day. It is not only changing the culture but also—quite remarkably, given Silicon Valley’s history—fueling cross-class support for employee unionization. Women and gender-diverse employees are on the front lines of these movements as well. 

The tech industry loves to talk about how it is changing the world. Yet retrograde, gendered patterns and habits have long fueled tech’s extraordinary moneymaking machine. Breaking out of them might ultimately be the most innovative move of all. 

Historian Margaret O’Mara is the author of The Code: Silicon Valley and the Remaking of America.

Google examines how different generations handle misinformation

A habit called “lateral reading” is a core part of any good fact-checking routine. It means opening up a bunch of tabs and doing multiple searches to verify the facts, source, or claims made in a piece of online information. So it seemed like great news when a new study from Poynter, YouGov, and Google indicated that Generation Z is adopting this technique more than any previous generation. 

The study, released today by Google as the search engine team there rolls out several changes to how it handles misinformation, asked more than 8,000 people ranging in age from Generation Z (defined for this study as those 18 to 25) to the Silent Generation (68+), across seven countries, about misinformation and how they research questionable content online. 

Essentially, the study concludes that younger people are more likely to think they may have unintentionally shared false or misleading information—often driven by the pressure to share emotional content quickly. However, they are also more adept at using advanced fact-checking techniques. 

One-third of Gen Z respondents said they practice lateral reading always or most of the time when verifying information—more than double the percentage of boomers. About a third of younger people also said they run searches on multiple search engines to compare results, and go past the first page of search results. 

Portions of the survey provide an interesting snapshot of how people of different ages, and in different locations, experience misinformation and think about their own role in stopping or spreading it: 62% of all respondents believe they see misinformation online every week, for instance. Gen Z, millennial, and Gen X readers are more confident in their ability to spot misinformation and more concerned that their close family and friends might believe something misleading online. 

However, the study relies on participants to accurately report their own beliefs and habits. And the optimistic figures about Gen Z’s actual habits contrast pretty starkly with other findings on how people verify information online. 

Sam Wineburg, a Stanford University professor who studies fact-checking practices, thinks he knows why that might be: when you’re trying to understand how people actually behave on the internet, “self-report,” he says, “is bullshit.” 

“What people say they do versus what they do do?” he adds. “That discrepancy goes back to the earliest days of social psychology.” His own research has found that without intervention, younger people seldom use lateral reading or other advanced fact-checking techniques on their own.  

In one recent study led by Wineburg and his team at Stanford, researchers wanted to learn whether an online course in fact-checking techniques could improve how college students verify information. Before the course, just three of the 87 students they tested engaged in lateral reading, meaning in this case that they left the website they were asked to evaluate to consult an outside source. 

“If people spontaneously did [lateral reading], we’d all be in a lot better shape,” Wineburg said. 

In a larger study, more than 3,000 high school students were asked to investigate a series of online claims. The results were pretty bleak: more than half the students tested believed that an anonymous Facebook video filmed in Russia contained “strong evidence” of US voter fraud. (Full disclosure: I was a participant in an earlier study from Wineburg’s team that observed the methods of fact checkers and compared them with those used by historians and Stanford undergraduates.) 

Gen Z clearly uses the internet differently from previous generations. But young people are also susceptible to the same traps, weaponized misinformation tactics, and pressure to share that have fueled bad online practices for years. 

The Download: psychedelics for women, and Roe v. Wade online

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

Psychedelics are having a moment and women could be the ones to benefit

Psychedelics are having a moment. After decades of prohibition and vilification, they are increasingly being employed as therapeutics. Drugs like ketamine, MDMA, and psilocybin mushrooms are being studied in clinical trials to treat depression, substance abuse, and a range of other maladies. And as these long-taboo drugs stage a comeback in the scientific community, it’s possible they could be especially promising for women.

Is this the beginning of a brighter future for women’s health, one where common mental disorders, symptoms of chronic pain, and intense mood swings are managed with mind-altering trips? While psychiatrists are optimistic, they are rightly concerned about the potential for abuse. Read the full story.

—Taylor Majewski

The cognitive dissonance of watching the end of Roe unfold online

When the United States Supreme Court reversed Roe v. Wade on the morning of June 24, 2022, thousands of people first heard the decision by reading SCOTUSblog, a news site launched 20 years ago. Katie Barlow, the blog’s media editor, was one of the few correspondents on camera the moment the opinion was released, reading it out to her audience on TikTok. It was a fitting way to enter the official post-Roe age: on platforms that can feel so personal to their publics, even as history unfolds.

Back in 1973, an issue of Time magazine appeared on newsstands, announcing that “abortion on demand” had been legalized by the Supreme Court, scooping the court’s own announcement by a few hours. In 2022, the phone might still be how you learned of the decision made by six justices, but now that phone could also give an instant voice to millions whose rights were rolled back with their ruling. And it’s also the device that could let us help someone we have never met before travel to a state where abortion is still legal. Read the full story.

—Melissa Gira Grant

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 Facebook gave a teenager’s messages about her abortion to police
The evidence is being used to prosecute her and her mother. (Motherboard)
+ Big Tech remains silent on questions about data privacy in a post-Roe US. (MIT Technology Review)

2 Conspiracy theorists are staking out voting drop boxes
They’re trying to catch fraudsters submitting fake ballots, but the legality of such surveillance varies from state to state. (NYT $)

3 What sets bitcoin apart from other cryptocurrencies
Other coins haven’t weathered the crash as successfully. (Protocol)
+ It’s okay to opt out of the crypto revolution. (MIT Technology Review)

4 Chip makers are bracing themselves for a slump
Just as Joe Biden signed the CHIPS Act. (WSJ $)
+ Congress still needs to decide who will benefit from the legislation’s ample funding. (WP $)
+ Taiwan is pushing Foxconn to drop its investment in a Chinese chip firm. (FT $)

5 5G is riddled with security vulnerabilities
It could take years to fully secure it. (Wired $)
+ Not all 5G coverage is built equally. (Android Police)

6 Russia’s propaganda is spreading beyond the West
It’s an attempt to justify the invasion of Ukraine and garner support for it from other countries. (NYT $)
+ TikTok is ‘shadow-promoting’ videos uploaded from Russia. (Wired $)

7 NASA’s space shuttle has a troubled legacy
Fifty years after the program was approved, its achievements are matched by its unfulfilled potential. (Slate)
+ SpaceX’s latest Super Heavy prototype test was a success. (Gizmodo)

8 TikTokers are ‘shifting’ into new, desired realities
They’re partly motivated by the depressing current political climate. (Input)
+ There’s still no evidence that we’re living in a simulation though, I’m afraid. (The Guardian)

9 Google wants Apple to fix text messaging
It’s blaming Apple for the war of the blue and green bubbles. (Insider $)

10 Robot arms are coming to Japan’s convenience stores 🦾
Next stop: the US. (Bloomberg $)

Quote of the day

“In 10 years, will this be an Airbnb village?”

—Rhys Tudur, a member of Gwynedd council in Wales, despairs at the rise in local landlords evicting tenants in favor of running lucrative Airbnbs, as he tells the Guardian.

The big story

How Amazon Ring uses domestic violence to market doorbell cameras


September 2021

When Ring launched ten years ago with a crowdfunding campaign, the market for home surveillance cameras and video doorbells barely existed. Now Ring has it cornered.

Despite the company’s focus on police partnerships, it’s unclear how much the cameras actually help in deterring or solving crimes. Meanwhile, civil liberties groups have raised concerns about how Ring’s cameras and app may lead to racial profiling, excessive surveillance by police, and a loss of privacy.

As these doorbell cameras have become more widespread, law enforcement agencies have experimented with using them in more targeted ways, including to address one of the most intimate and complicated of crimes: domestic violence. But some experts in the field are concerned that initiatives in partnership with the police inject a combination of potentially dangerous factors into the lives of those they are supposed to protect. Read the full story.

—Eileen Guo

We can still have nice things

A place for comfort, fun and distraction in these weird times. (Got any ideas? Drop me a line or tweet ’em at me.)

+ Bandit the home-guarding cat deserves a medal for his service (Thanks Craig!)
+ The eternal dilemma: should you buy a PC or build your own?
+ Nu metal dressing is the trend of the summer.
+ An intriguing introduction to America’s river wanderers of yesteryear.
+ This may be the only acceptable form of Croc.