With Stephanie Georges and Kate Lilienthal
TRANSCRIPT
Stephanie Georges: Hi everyone. I’m Stephanie Georges, and I’m delighted to be here with Kate Lilienthal in conversation with Henna Karna on behalf of The Meraki Dignity Project. I’ll hand it over to Kate.
Kate Lilienthal: This is a very exciting moment. We’re speaking with Henna Karna, a behavioral economist with a long and illustrious career at Google. She earned her PhD from MIT, focusing on identity, digital identity, and cybersecurity. She’s also a dear friend, and we love her.
Henna, we talk a lot about humanizing AI and what that means. But I don’t hear much discussion about womanizing AI. Are there particular traits or ways AI is being developed with women in mind—either female traits or female users? How should women be thinking about AI?
Henna Karna: That’s such a great question.
AI is an incredible reflector. It’s a megaphone. Whatever we put into it—both the content and the tone—is what it amplifies back to us.
But AI can also do something else. In its effort to aggregate and summarize information, which it does beautifully, it can minimize nuance. It connects dots, summarizes data, and generalizes patterns, and in generalizing it can flatten the subtleties in how people, women included, may see the world differently.
Nuance is one of our most powerful assets, and aggregation tends to reduce nuance.
So AI alone will not solve this problem. The systems behind it are built to generalize. If we want nuance, we have to actively ask for it.
For example, we might ask AI:
Tell me this story as a fifth grader.
Tell me this story as a college senior.
Tell me this story as a woman working at Dunkin’ Donuts.
Tell me this story as a woman working in a manufacturing plant.
Those different perspectives must be explicitly requested—just as we would ask different people for their viewpoints. The difference with AI is that we can keep asking without worrying about burdening someone.
So we shouldn’t assume AI automatically brings these lenses with it. It doesn’t—at least not today.
Another challenge is that AI tends to present everything in a very polished, positive way. Let me give you an example.
Recently I was on a call with a group of physicians—about 80 percent of the room were men in their fifties, including pharmacologists and oncologists. As the discussion progressed, the summary of the conversation reflected the majority perspective in the room.
The two women present had different insights, but the aggregated takeaway didn’t capture them as strongly. Because I was in the room, I knew to weigh those perspectives more heavily.
But imagine if I had only received an AI-generated report. It would have been beautifully formatted—with charts, summaries, visuals. I might have accepted it as the full story because it looked so polished.
In reality, the summary might not have fully captured the nuance in the room. In that case, I would have outsourced my discerning lens to AI.
So I think we need to ask AI questions like:
What did I miss?
What might this report have overlooked?
What’s the one thing I might regret not knowing two hours from now?
Those are powerful questions for us to ask.
Kate Lilienthal: That’s disturbing. If we already have a gender gap, and AI is moving so fast, it could actually widen that gap instead of closing it. Are people in the industry talking about this? Are there efforts underway to address it?
Henna Karna: There are people working on it, yes. But there are also many other AI concerns—ethics, empathy, safety—that are capturing attention right now, and those discussions are often gender-neutral.
The bigger driver at the moment is speed.
Everyone wants AI for faster, better, cheaper. Organizations want results immediately. If something that used to take three days can now be done in an hour, the expectation becomes that it should take an hour—and that you should move on to the next task.
Let me share an example that illustrates the risk.
A professor once told me about a data model he built during an internship. It took him three months to develop. After completing it, he took the model to a senior oncologist for feedback.
The oncologist told him that although the model looked correct, it missed one variable that experienced physicians knew was critical—something not yet captured in scientific data, but observed through years of treating patients.
Because he had spent three months building the model, the professor was cautious: he sought that outside review and double-checked his work. That conversation led to new research, new papers, and a new approach to treating the disease.
Now imagine the same model built with AI.
It might take three hours instead of three months. Would the researcher still seek outside feedback? Maybe not. The speed of the process might discourage the second look.
That’s one of the hidden risks of working too quickly.
Kate Lilienthal: That’s a real peril of speed. When we work too quickly, we miss things. Is the industry developing protocols to address these risks?
Henna Karna: Yes—but probably not enough.
Most current safeguards focus on questions like:
Can we maintain quality while increasing speed?
How do we measure quality?
The gender gap question is important here, because historically gender equity was not always a defined quality metric. If it wasn’t measured before, it won’t automatically appear as a measurement now.
From a business standpoint, the conversation is often about productivity: optimization, efficiency, delivering value to customers faster.
But we already know that diverse leadership teams—particularly teams with women at the table—lead to better outcomes for companies and products.
The question is: How is that insight being integrated into AI systems?
Right now, it’s not as front and center as it could be.
Kate Lilienthal: Is it being captured at all?
Henna Karna: In some cases, yes—but often indirectly.
For example, many AI discussions emphasize empathy, which is a broader concept. That helps move the conversation forward.
But when it comes to gender representation, we still see familiar patterns.
If I ask AI to simulate a CFO sparring partner, it often defaults to a male persona. I sometimes have to ask it to generate a gender-neutral version.
Even then, it might claim neutrality while still reflecting underlying biases.
The challenge is that AI systems are built on existing data. If we want them to recognize different leadership styles or perspectives, we need to teach them those distinctions.
Kate Lilienthal: Let’s shift perspectives. We’ve talked about what AI does to us—but what about what we do with AI? Do women use it differently than men?
Henna Karna: From what I’ve seen, AI usage is generally gender-neutral.
However, there are interesting generational differences. Some data suggests that young men are more likely than young women to bring emotional questions to AI.
One possible explanation is that AI offers a private, nonjudgmental environment. Boys may feel more comfortable asking questions there that they might not ask elsewhere.
Because AI is not perceived as human, people may feel freer to explore certain topics. That dynamic creates an interesting kind of neutrality.
Kate Lilienthal: This really is a brave new world. Let’s close with one last question. Where do you see your role in this landscape?
Henna Karna: While I was at Harvard, I finished a book about people skills in the age of AI. The central skill I focus on is empathy.
AI is artificial intelligence—but it is still artificial.
One quote I love is:
You can outsource your role to AI, but you cannot outsource your responsibility.
For me, the goal is to ensure that AI supports a world that is both highly productive and deeply empathetic.
If AI allows us to solve problems faster, then perhaps we can devote more energy to the challenges humanity still hasn’t solved—things like disease, hunger, and inequality.
But that only happens if empathy remains central.
So I often say:
Let’s not build AI without EI—emotional intelligence.
Put EI before AI.
Kate Lilienthal: Henna, your optimism is inspiring. Before we close, we have one final question. What is your Meraki—the thing you pour your whole creative soul into?
Henna Karna: Honestly, I’m still working on that.
Meeting Stephanie and learning about the word Meraki made me reflect deeply. The concept exists in many cultures, but I hadn’t stopped to ask myself that question before.
Do I have a Meraki?
Do I have a position or perspective there?
When I was younger, it might have been painting. I loved drawing and building things with my hands. My first job involved woodworking.
I haven’t done those things in a long time, so perhaps rediscovering that creative side is part of my journey.
For now, it remains a question I’m still answering.
Kate Lilienthal: Your honesty makes me misty-eyed. Thank you for this conversation.
Stephanie Georges: Henna, we’re grateful for you.
Henna Karna: Thank you both.