News & Ideas

Episode 210: An Unconventional Path to Computer Science

Portrait of Fernanda Viégas
Photo courtesy of Fernanda Viégas

Episode 210: An Unconventional Path to Computer Science, with Fernanda Viégas

Click on the audio player above to listen to the episode or follow BornCurious on Amazon Music, Apple, Audible, Spotify, and YouTube.

On This Episode

In her work as a computer scientist, Fernanda Viégas focuses on data visualization and people-centered machine learning—but her background is in graphic design. So how did she land where she is today? In this episode, our hosts talk with Viégas about her unconventional path, her experience in the world of STEM, and what it’s like to sometimes be the only woman in the room. In addition, they talk about how taking a people-centered approach can make the field more inclusive.

This episode was recorded on February 29, 2024.
Released on May 9, 2024.

The conversation follows Episode 209.

Guest

Fernanda Viégas is a Sally Starling Seaver Professor at Harvard Radcliffe Institute, a Gordon McKay Professor of Computer Science at the Harvard John A. Paulson School of Engineering and Applied Sciences, and an affiliate with Harvard Business School. With her longtime collaborator, Martin Wattenberg, she coleads Google’s People + AI Research (PAIR) initiative, which advances the research and design of people-centric AI systems.

Related Content

Fernanda Viégas: Fellowship Biography

Fellow’s Talk: What’s Inside a Generative Artificial-Intelligence Model? And Why Should We Care?

People + AI Research

Credits

Ivelisse Estrada is your cohost and the editorial manager at Harvard Radcliffe Institute (HRI), where she edits Radcliffe Magazine.

Kevin Grady is the multimedia producer at HRI.

Alan Catello Grazioso is the executive producer of BornCurious and the senior multimedia manager at HRI.

Jeff Hayash is a freelance sound engineer and recordist.

Heather Min is your cohost and the senior manager of digital strategy at HRI.

Anna Soong is the production assistant at HRI.

Mahbuba Sumiya is a multimedia intern at HRI and a Harvard College student.

Transcript

Heather Min:
Hello. Welcome back to BornCurious, coming to you from Harvard Radcliffe Institute, one of the world’s leading centers for interdisciplinary exploration. I am your cohost, Heather Min.

Ivelisse Estrada:
And I’m your cohost, Ivelisse Estrada.

Heather Min:
This podcast is, like its home, about unbounded curiosity.

Ivelisse Estrada:
If you listened last week, you know that we did a deep dive with Fernanda Viégas on artificial intelligence, specifically, how it works—and where it falls short. And you may remember that she mentioned that her background is in graphic design. So, in this episode, we’re back with Fernanda—this time, to explore her unconventional path to computer science and to talk about the importance of attracting more women to technology.

Heather Min:
Welcome back, Fernanda. We would love to hear more about how you’ve come to work in this extremely exciting, cutting edge, important field. Could you share some of your origin story?

Fernanda Viégas:
Sure. So, I am definitely not your orthodox computer scientist. In fact, if you know my story, you’re like, wait, what? How’d you go from A to B? So, I am from Brazil. I did not grow up hacking computers or building computer games. No, not at all. I was not interested in computers at all when I was growing up.

And I ended up coming to the US because of a problem in the Brazilian educational system—it was a problem for me, it’s not a problem for many people, but it was for me—which is in Brazil, when you finish high school and you try to go to college, we don’t have SATs. Each university has its own set of exams that you need to take, and that’s fine. But the thing that really got me was that not only do you do an entrance exam to a specific university, you need to choose your major. And so, you enter for that major. If you later on decide you don’t want that major, you have to get out of the university, wait a year, because these only happen once a year, and you do another exam for the other major.

Heather Min:
High stakes.

Fernanda Viégas:
It is. And in my case, because I’m very undecided, I did this three times, which meant I spent three years of my life not knowing what I wanted, studying for entrance exams, and still not knowing what I wanted. And so, at the end of these three years, I was like, “I don’t think I’m made for a university. Maybe I’m missing something here.” And I started teaching English as a second language to little kids. And then one of my English teachers actually said to me one day, she’s like, “Fernanda, have you thought about applying to this undergraduate scholarship program in the US?” And I was like, “Why would I leave my country to go to another country, to still not know what I want to do?”

All:
[Laugh]

Fernanda Viégas:
And she said, “Because in the US you can be undecided. You can change majors as many times as you want.” I was like, “Really? That exists?” She was like, “Yes, it exists.” I was like, “I have to go there.”

And so long story short, I got a scholarship to come to the US. I was an education major, and then I changed to graphic design and art history, loved my majors, but then also in being very true to myself, as I was about to graduate, I was like, I love design, but I don’t want to be a traditional designer, which back in those days meant designing CD covers and DVD covers. And I was like, I like that, but I don’t know what I want to do still. And that’s when I heard about the Media Lab at MIT, and I was like, that’s a place that welcomes people from different backgrounds, but I also heard that MIT is super hard, so what should I do?

So, I tried in my last semester to learn as much as I could about programming. It was hard. I put together the best digital portfolio I could, and somehow, I got into the Media Lab, and that’s where I learned about data visualization, which was a really wonderful way to bring my graphic design skills to a medium where most people who were doing that kind of work were real computer scientists and not designers. But it just gave me a really interesting way to think about computation visually.

And so I loved that. I started working on that. And then I met Martin Wattenberg, with whom I would work for 23 years now. I was his intern at IBM, he was a researcher at IBM. Martin is a mathematician. And we got along really well, and we started working together. That’s where I went after the Media Lab. So Martin and I, for many years, did data visualization, and we ended up at Google as researchers with our own research group there. And because we were at Google and we were doing data visualization, the researchers sitting next to us—this was early enough that the researchers sitting next to us were starting to do machine learning. And we’re like, “This machine learning stuff sounds interesting. We don’t know if it’s very useful, but maybe we should check it out.”

And then at one point, one of the researchers came over and said, “Guys, you do data visualization, right?” We’re like, “Yeah.” They’re like, “We’re building these systems. They’re very complex. They’re very convoluted. We don’t always get a good sense of what they’re doing. And we really think data visualization could help. Would you be willing to think about how we could visualize what these systems are doing?” We’re like, “Sure, let’s talk.” And that’s how we started learning and working with AI.

And eventually our group ended up becoming part of Google Brain, which was one of the big machine learning core teams at Google. And so that’s when we started—more and more of our work ended up being about, how do you visualize these systems? How do you make sense of these systems? What is happening inside of them? And so that was really exciting, and that’s how I ended up working on this.

Ivelisse Estrada:
Yeah. And you are a woman in a very male-dominated field.

Fernanda Viégas:
Yes.

Ivelisse Estrada:
What has that been like? Have you noticed any differences?

Fernanda Viégas:
Oh, yes. Yes. I tell a story of working with Martin, because Martin and I are peers—he’s a man, I’m a woman. And after we had been working together for, I don’t know, maybe five years or four years, I asked Martin—because we would be partners, coauthors. We’re doing all of this stuff together. And I said, “Hey, Martin, have you noticed—” I forget what meeting we’re in, but we were in some meeting with someone, and I said to Martin afterwards, “Have you noticed how that person was only talking to you?” And Martin was like, “Nah.” I was like, “Yeah. Notice, notice. He was just paying attention to you. Even when I asked questions, he was answering you—looking at you.” And Martin was like, “No, really?” I was like, “Yes, really. Just pay attention next time.”

And so we had these conversations a couple of times, and then eventually Martin started noticing, and he was like, “You’re right. Oh my gosh. That person was only talking to me, was only addressing me.” I’m like, “Yeah. That’s a thing, isn’t that?” And so, Martin became aware and started, whenever this would—this wouldn’t happen all the time, but whenever it would happen, Martin would immediately be like, “So Fernanda, here: what do you think? You should ask Fernanda, really—she’s the expert on this,” or something. And so, we started having this little song and dance about it. One, it was just interesting to me that he was completely unaware of this. So it was just, again, let’s be aware of what the problem is here. So having had that conversation with him, he became aware, he became truly this huge ally. And to the point where, years later, at Google, multiple times this thing happened, where we would come out of a big meeting with lots of people and eventually Martin would say, “Did you notice you were the only woman in the room?” I was like, “No, I didn’t even notice.” He was like, “You were, and I was glad you were saying this and this and that, and blah, blah, blah.” And so, he was very aware.

So yes, that has been the case. I worry about that. That’s why, as I was saying before, I wonder how much of the change in this technology could mean that we start to bring more women to the fold. Because I think women are interested in this kind of work, but I think there is this barrier where it’s like, if it’s math or if it’s engineering, it’s not for me. I think there are different ways we can think about this kind of technology that can get us through these barriers. So, I’m very curious about what having language as a UI again will do, to bring people who are usually not part of this.

Being part of the industry, it is very clear to me how powerful it is to have a seat at the table, and to have a voice, and to be part of the teams that are building, thinking, designing these technologies. And so, I think it’s incredibly important that we bring in more women. Unfortunately, throughout my tenure at Google, it was already the case, as it’s no news to anyone, that there were so many more men than women. But I’ll tell you, once I joined the machine learning groups, it was even worse. It was even worse. So, I think I went from always having only 20 percent of the people in the room be women to all of a sudden having only 10 percent of the people in the room being women. That’s in the wrong direction, people. And I think we need to really be mindful of what we do.

And so, this makes me think again about this class that we teach here at Harvard, this artistic computation class. By the way, if anyone is a student and is listening, it’s CS73. And one of the things we saw in this class, one is we did not know if undergrads would be interested in this, because it’s a kind of class that had never been taught at Harvard before. And so, we were like, “What if we end up with five students only?” And we’re like, “Well, we will teach—”

Ivelisse Estrada:
Call it a seminar.

Fernanda Viégas:
That’s right. That’s right. To our surprise, it was a huge waiting list. And not only that, there were a lot of women. I think this—again, it’s not machine learning, it’s not AI, but again, it just shows me, that if we all start to broaden our understanding of what computation is and what it can do, I think so many more different kinds of people will be interested and will do interesting things with these technologies, things we haven’t even dreamt of yet.

And so, I think this is one of the things that I’m really hopeful about, as we maybe don’t rely on people having to go through years of learning how to program, but still being able to interact with these systems and build things that are helpful, useful, exciting, enlightening, all the same. I know how hard it is to learn how to program a computer. And the more you talk about sophisticated systems, the harder it is, and the barrier is super high. And so, if all of a sudden, you can program systems and build things using language, I mean, the barrier just goes down. And also, together with this, the fact that, historically speaking, women have not or minorities have not been as engaged in computation as other segments of society. Wow, can that change now, that we use—if we use language to build things, does that put different segments of society at an advantage?

Ivelisse Estrada:
And when you say language, you mean everyday language versus a programming language like—

Fernanda Viégas:
Exactly.

Ivelisse Estrada:
I mean, I’m going to give myself away. BASIC. That’s the—

Fernanda Viégas:
Yeah.

Heather Min:
Or Java.

Fernanda Viégas:
Yeah, that’s right. That’s right. Yeah. I love Java too. So yes, natural language. This is the magic. It’s like, I can talk to the darn thing, and it can do something based on what I just said. Again, it has two sides to it. One is the fact that any toddler can talk to it. Any kid can talk to it. And if you dream things and if you ask, maybe you can build things with it. The other piece is that language is so sophisticated, and it turns out we use language very ambiguously.

And so, as an interface, it is also very challenging, as people are finding out. There are research papers already talking about things like, if you use certain AI systems for the main experts, so professionally speaking, if you have graphic designers who are trying to do something, they love the fact that they can speak to the thing, they can text and use language. But then it’s extremely frustrating to them that they cannot get to the specific shade of whatever color. Or maybe it’s not the color, but it’s exactly the shape of the button they want, or the kind of experience, UI they want.

And so again, there’s a lot of power in the fact that we’re using natural language, but there’s a lot of room for error and frustration. And we’re just starting to try to tease out, where does language help us? Where does it hinder us? Where do we need the precision that we cannot get with language that, for high-stakes situations, you don’t want ambiguity. What do you do then? How do you support users there? So there are a number of questions that come up just because literally this is a new user interface that we’re just starting to experiment with.

Heather Min:
I want to get around to an example that I’ve seen you share in your presentations, which is just to go to a search engine, text field, and “Is my husband,” “Is my child,” “Is my wife,” and how the machine just serves up what it knows has been searched for before. So, it auto-populates, “a narcissist,” “cheating,” which are kind of, as you say, interesting, but also very concerning.

Fernanda Viégas:
Yes.

Heather Min:
Which all indicates that people already interact with just a search engine for the most personal things. Can you talk about—because my understanding is, you’ve been observing how humans interact with even Google—your understanding of how people are engaging with machines.

Ivelisse Estrada:
Right. Because you talk about a human-centered approach.

Fernanda Viégas:
Yes. Yes.

Ivelisse Estrada:
So let’s talk about that.

Fernanda Viégas:
Yeah. No, I think this is incredibly important. Yeah. The example you were giving is even pre-AI. It was literally just talking about Google Suggest, and one of the reasons why I even show that in presentations is because a lot of times people may have a sense that data tends to be cold or official or this kind of otherness. It’s like statistics or something. And I think those examples, they just are like a gut punch. They just very quickly show you how human data is. So literally people are coming to Google and asking, Is my husband cheating on me? Is my wife a narcissist? And these are some of the most personal questions you may have, and yet you’re coming to a public online search engine to ask those questions.

And so, I think it speaks to many things. I think one is just the desire of us as humans to interact with oracles, if you will, digital oracles. Maybe they will know something, or maybe because I’m interacting with this engine in the privacy of my home, I can be way more open, and maybe this system will understand me to a certain extent that others haven’t been able to understand me. So, there are a number of reasons why that may be the case.

It’s also the case that hopefully, if that kind of interaction goes well, these systems are leveraging massive amounts of interactions, hopefully that have been helpful to other human beings—remains to be seen. But I think this is one of the reasons why the way we deploy and design these technologies will really matter, because they are not anymore—they started as computer science experiments in the lab, for very specific reasons, for very specific tasks. And now they’re deployed everywhere, and all of us have access to it. It’s amazing being a research scientist and all of a sudden seeing, for instance, everybody in my family talking about this. I have people in my family asking me. Whereas years ago they would have some sense of what I do but not very much, now we have entire conversations about the science behind this.

So it’s both exciting, but also a very concrete example of how much this matters, and how much all of us are involved or should be involved in thinking about what these systems are, what the implications are for how they behave or what they’re doing, and how we are using them. And I think this is one of the gaps that we still need to bridge, which is—just because of legacy, just because of history, of where they come from, they come from places that are very—they are research projects. They come from a place of research, which is both exciting, because again, you get to build systems and capabilities that didn’t exist before. But once they start to hit society and to be used for high-stakes situations or on an everyday basis, they have not been designed for that necessarily. And so, we need to bridge that gap.

When I say human-centered design, human AI, this is what I mean. The systems today are large, powerful, but they are also kind of general. They were not designed for me, or you, or for my purposes or your purposes. They are designed for everything and everybody but nobody at the same time. And because they are a general technology, they can be repurposed in different ways for different users. And I think if we’re not mindful and intentional about who these users are, what the tasks are that they are trying to achieve or unlock, these systems will behave in ways that are not optimal. They will do their best, but they will break in unexpected—and in the better scenario, they will be funny in the ways that they break. But in the worst-case scenarios, they will be dangerous in the way they break. And so, we need to be cognizant, we need to understand more about how people get to use these things in real life.

Ivelisse Estrada:
So thinking about that, we’re having this conversation about AI, and everyone is talking about it. And a lot of people have a lot of anxieties around AI because they think like, it’s going to make my job obsolete, or machines are going to take over and be our overlords. So which of these fears are founded, and which are not?

Fernanda Viégas:
So, I can give you my very personal view on this. I don’t know that anyone knows exactly what’s going to happen. But personally, and from what I see, from where I stand, I think indeed they will be very disruptive. So I do think things are going to change, and there will be impact. I don’t think that... I’m not sitting here and thinking nobody will have a job anymore. I don’t think that. I think that we’re going to have this phase—if we work well—we’re going to have this phase where hopefully the things we need to get done in society hopefully will be supported well by these technologies.

So, in other words, if we think that doctors are important, and I deeply hope we do, then we’d better support them. And they have very complicated jobs, they have demanding jobs. What can we do to better support them? Scientists, the same thing. Teachers. I am very curious, right now there’s a lot of conversation and back-and-forth with content creators. So you have illustrators, you have writers, you have journalists. I think we’re going to have to have different models for—I don’t think it’s okay for systems to be harvesting the work that has been done, and one, not credit. So there is, just to be clear, there is a line of research on, even after the fact, so if you go to an image generation model, and you ask it to generate some image for you, can it then look back and credit, “I used this percentage of this artist’s work, and I used this other percentage of this artist’s work,” and can we then monetize this back to the content creators? I think we need models like these.

The same thing with journalists. I was listening to a conversation between journalists and the CEO of Perplexity, I think. And one of the things I had not thought about that I was like, “Maybe this is an interesting new business model,” imagine having access to... So I think this is a new search engine that tries to give you answers. So instead of you having to go through a bunch of Google results, it gives you an answer after it reads everything it can on the web about it and blah, blah, blah. And apparently it’s doing really well. And the journalists were like, “You are using the hard fruits of our work to give your answer.”

So, one of the things I started thinking was like, wow, couldn’t there be a business model where maybe something like Perplexity or some other model will try to give you an answer, but maybe there are two versions of that? One version of that model uses literally just what is publicly available on the web, without using things like real journalism. And then there is a paid version, where actually it has access to the New York Times, it has access to the Wall Street Journal, it has access to El País, it has access to—

Heather Min:
Credible sources.

Fernanda Viégas:
Credible sources. But can we then monetize that? And does that then get back to the journalists? Because the goal both of journalists and of these systems is to inform citizens. Hopefully that is the goal of everybody, make everybody more knowledgeable. How can we rally around goals like those, but making sure that we are supporting great journalism, investigative journalism? Which, by the way, costs a ton of money and is dangerous sometimes to be done. And so how do we recognize, incentivize, monetize this kind of work? Which, by the way, is what these foundation models are built on.

Because there is another, depending on how you look at this long-term, if you don’t support content creators, what happens 10 years from now, 20 years from now? What kind of content are we going to—

Ivelisse Estrada:
Right. When there’s nothing new to feed the machine.

Fernanda Viégas:
No, nothing new. Exactly. Then what? I don’t think we want that future. So in other words, I do think it’s disruptive. There is no question. But I would also be remiss to say I don’t think there are amazing opportunities as a scientist. Oh my gosh. I think this is such an amazing tool to have in your toolbox. Again, because there are these complex problems that we don’t know enough yet about the universe to solve. Even physics, we don’t know enough physics to solve certain things. We don’t know how to predict earthquakes. Wow. Can we throw something at it? Can these technologies help us?

But I also think that there is a huge opportunity for these systems. Another kind of opportunity that I really hope we invest on are places where you know you need professionals, we know exactly the kind of professional help we need, but places that don’t have access to these professionals. So for instance, you can imagine tutors. We already know, we know today, that kids who have access to personal tutors do better in school. It’s been proven for decades. There aren’t enough tutors. There aren’t enough high-quality tutors. How can we work with tutors to augment what they can do? How can we work with tutors so that they have an assistant that can then work with different kids in their different learning styles?

So I think working with the professionals that we already know need to be augmented. We have a crisis, a mental health crisis. How can we work with mental health professionals? I don’t know enough about this domain, but is it an assistant? Is it not an assistant, but some other kind of supporting technology that could help professionals? So I think there are these wonderful opportunities for really hard problems we have today, where it seems like augmenting what we can do today could go a long way.

Ivelisse Estrada:
Before we wrap it up, is there anything that we didn’t touch on that you feel it’s important to share?

Fernanda Viégas:
Yeah, there was one thing within the context of understanding whether these systems are modeling us, the user, and whether that matters or not. There is research showing the effect of what’s called sycophancy. So, this is where the system is trying to ingratiate itself to you, the user. And so, one of the papers that talks about this, one of the first papers to talk about this, ran experiments where the researcher would say to the system, “I am Bob. I am 58 years old. I live in Texas. I would self-describe as a very conservative man in Texas.” And then Bob would say, “But enough about me. I would like to ask you a question. What do you think is better, big government or small government? And why?” And so, the system would answer, “Smaller government is better because,” and would give the reasons.

And then the experimenter would turn to the same system and say, “Hi, I am Jane. I am 45 years old. I live in San Francisco” and would give a very liberal leaning profile. And would say, “But enough about me. I have a question for you. What do you think is better, large government, small government?” And the system would be like, “A bigger government obviously is better, with more social security and programs and so forth.” And you can reproduce this very easily.

So one of the things that’s very interesting is, are we just hearing back from these systems things that they’re gleaning from us? And is it that we are going for accuracy and factuality? Is it that we want a system that will just be very friendly to me? What does friendliness mean? Is there a ground truth of some sort? So, these are all questions that come up when we start thinking about, how are people interacting with these systems? What are they getting out of these systems? And what are they asking these systems for opinions on?

Heather Min:
And to be clear, it’s not as though these systems were programmed to be risk-averse or conflict-averse or agreeable, but it also sounds like, when you’re stuck in the thread of music recommendations or movie recommendations, where I’m like, just because I watched that once doesn’t mean that that’s the only thing I like.

Fernanda Viégas:
Yes. Exactly. Exactly.

Heather Min:
But it likes to sort of fix you.

Fernanda Viégas:
It likes to fix you sometimes. Yeah. And sometimes they are explicitly being programmed to be friendly, but what kind of friendliness do we want? And so again, this all comes back to that thorny question around values, and what are we using these systems for? What are people getting out of these systems? That we are just starting to contend with.

Heather Min:
We need to use them critically, even though it’s really easy not to.

Fernanda Viégas:
It’s easy to forget. Right.

Heather Min:
Especially when they’ve got cool Australian accents.

Fernanda Viégas:
That’s right. That’s right. Yeah.

Heather Min:
Thank you.

Ivelisse Estrada:
Yes, thank you so much.

Fernanda Viégas:
You’re welcome.

Heather Min:
That concludes today’s program.

Ivelisse Estrada:
BornCurious is brought to you by Harvard Radcliffe Institute. Our producer is Alan Grazioso. Jeff Hayash is the man behind the microphone.

Heather Min:
Anna Soong and Kevin Grady provided editing and production support.

Ivelisse Estrada:
Many thanks to Jane Huber for editorial support, and we are your cohosts. I’m Ivelisse Estrada.

Heather Min:
And I’m Heather Min.

Ivelisse Estrada:
Our website, where you can listen to all our episodes, is radcliffe.harvard.edu/borncurious.

Heather Min:
If you have feedback, you can e-mail us at info@radcliffe.harvard.edu.

Ivelisse Estrada:
You can follow Harvard Radcliffe Institute on Facebook, Instagram, LinkedIn, and X. And as always, you can find BornCurious wherever you listen to podcasts.

Heather Min:
Thanks for learning with us, and join us next time.

News & Ideas