Interview with April Yoder
Let's start this conversation with your studies. You first got a Bachelor of Arts at the University of Indianapolis, later a Master of Arts at the University of Arizona, and finally a PhD in History at Georgetown University. I think it is an interesting mixture, so tell me more about it.
My BA is actually from the University of Indianapolis, which is a small comprehensive university in Indiana. I decided to pursue research about Latin America after studying how Latin American poetry reflected the history of the Cold War for my undergraduate thesis. I continued studying Latin American literature and history for my MA in Latin American Studies at the University of Arizona before deciding to focus on history. I chose Arizona in part because it was in Arizona: I was tired of Midwest US winters.
I lived in Boston for a few years before starting my PhD at Georgetown, and by then I was deep into baseball through some personal connections. That led me to study Latin American history through baseball. I kept my focus on the Cold War in studying baseball, but it also led me to think about how citizen action can shape government policies related to industry. One trend I noticed in my research on baseball in the Dominican Republic was a shift toward governments investing in industry rather than people or human capital, or choosing economic development over the political freedom and economic opportunity citizens expected from democratic governments. Seeing similar themes in discussions around artificial intelligence and human choices helped me land on AI for my next research project.
I am particularly curious about the topic you chose for your PhD dissertation, “Pitching Democracy: Baseball and Politics in the Dominican Republic, 1955–1978”.
My undergraduate history professor helped me see how popular culture reflects history. In undergrad I used literature as a lens into history. I got away from the theme a bit in my MA research, but my experience with the baseball industry—mostly MLB—made baseball a good target for my PhD research. I knew from earlier work that Dominican citizens had used baseball to contest US occupation in the early twentieth century and that the US occupied again in 1965, so I started with that question. I had a hard time finding sources to look closely at that question, but I found a number of editorials and political cartoons during a scouting trip after the first year of my PhD program that showed how Dominicans projected their hopes for democracy onto baseball. More research showed that conflicts often centered on how different groups emphasized the political rights and freedoms of democracy versus economic development, so that became the central question. The dissertation ended in 1978, but further research allowed me to extend the story to roughly 2018.
In these interviews I usually talk to people working in the “Data Field”. I know that for some of my colleagues “Data Field” almost exclusively refers to big volumes of data points used to build models, predictions, products, and so on. However, I strongly believe qualitative data is often essential to fully unleashing the value of data. Since you have worked for many years on ethnography and other types of qualitative research, I would like you to explain the advantages of these methodologies.
I am always impressed with people who can handle huge datasets and create visualizations that help us understand a relationship with a glance. But I also find myself really frustrated with the generalizations and assumptions that go into these kinds of studies because they gloss over the human complexity that interests me. I’ve realized through teaching that what I love about history is being able to point to people, decisions, and contexts that have shaped our world. I revel in the complexity and gray areas. Ethnography and qualitative research, such as digging into primary sources and details about contexts and decisions, allow us to weave narratives about the past that can draw us in and help us see the continuities as well as the changes. Both help us see decisions we might make differently.
A cartoon I randomly found online—it looks like a New Yorker cartoon, but I haven’t been able to find the original—plays on the adage that those who don’t know history are doomed to repeat it, but adds that those who do study history are doomed to stand helplessly while everyone else repeats it. That’s a little simplistic about what history shows us, but we need the details and specificity to gain any understanding of what happened and why.
And how much has Qualitative Research changed in the last 20 years?
I don’t have a good sense of this on a larger scale, but I’ve enjoyed the greater access to resources through digitization. I love that I can access publications from all over the world from my computer. When I was doing my dissertation research, I was able to read some Dominican newspapers on microfilm at the Library of Congress. And by the time I completed my research in the Dominican Republic, I was able to download years of newspapers onto a thumb drive.
Still, reading and digesting all this information takes time, so I was limited by that. The downside is that having so many sources available makes it even harder to stop reading and start writing. Another potential downside is that it could lead to fewer research trips. Over the five years or so I was doing research for my book, I spent almost two years in the Dominican Republic. That experience living in the country, going to the archives, watching baseball games, hearing the election parades, and talking to people informed my research and added a more human element to my work. I would hate to see that replaced by technology. In that way, the digital divide may actually benefit some research.
As a person passionate about the potential of research, I am worried about this risk: AI-generated bots being used to produce fake answers for online surveys. What’s your perspective on this?
The risk is certainly scary, but I think we can come up with different ways of doing research. Online surveys are certainly convenient and can allow us to gain the numbers to get some quantitative insights. But surveys are also limited by the questions, options, and distance of human interaction. My historian and qualitative bias is showing here, but what if we talk to people instead? It takes longer, but as we spend more and more of our time interacting with bots or AI agents or whatever, that human interaction may become invaluable.
As part of your experience at the University of New Haven, you have used research insights from surveys and interviews to drive user satisfaction, enablement, and retention. Could you share some examples?
These UX insights have all been rooted in my teaching, so it’s kind of secondary to my academic research. I often implement changes to my assessments and assignments based on the outcomes of student course ratings, how students perform on assignments, and conversations we have in class. I haven’t been super systematic with this lately, so it doesn’t really count as research. Still, one problem I’ve been working on lately—and I think all educators are dealing with this—is guiding students to use generative AI tools effectively and responsibly. I started by giving them the freedom to experiment while emphasizing that they were responsible for the accuracy and human-ness of the output, and for ensuring that GenAI doesn’t replace them. That didn’t work out great, mostly because they turned in subpar work. So I’ve been doing more work to show them what I expect and suggesting where AI may be helpful and where it may not. Based on what I saw this semester, I’m going to go back to the basics of what evidence looks like and create templates for students to report their research findings by starting with the evidence rather than the product.
What would be your 3 DOs and 3 DON’Ts when starting a research project?
DO familiarize yourself with what’s been done before, whether that’s a literature review or desk research from industry archives, because that’s the first step to understanding the problem.
DON’T read too much before you start writing, whether that’s in the lit review stage or while you’re collecting data. It’s hard to figure out when to stop reading and start writing, or when to stop reading even if you’re already writing, but you have to accept that you can’t read everything. I think I was able to write my dissertation as efficiently as I did because my advisor told me near the end of my longest research trip to stop reading (especially secondary stuff) and start outlining. That got me out of the reading stage and forced me to make connections and think about the significance of what I had found.
DO have a plan for and stay on top of data collection and recruitment — I failed at this when I tried to formalize a research project on my teaching. I was trying to minimize students’ work, but didn’t remind them enough to take the surveys!
A related DON’T: don’t try to do too much at once. Scope the research for your objectives, questions, and schedule. I failed in part because I was trying to do research while teaching 4 classes, including a new one.
DO start with a research question, even if it’s broad to start, but be open to change. We don’t always necessarily know what we’re looking for, but we have to start with a question or we’ll try to read everything.
DON’T leave citations or quotation marks out of your notes, planning to go back and find them later.
And how do you recommend coping with issues such as bias and the significance of sample size? I am not referring to the technical side, but to when you are dealing with data-driven profiles.
I don’t really deal with this issue in my work because I’m very qualitative in my approach. Bias is always a concern because so much of my work is interpreting and finding connections in sources. I address it by considering the context and being open about other possibilities.
In 2023 you started to collaborate with the “Center for AI and Digital Policy”. First of all, could you explain what the purpose of the organization is?
The Center for AI and Digital Policy (CAIDP) is a civil society organization that consults with governments, international organizations, and other organizations to guide the development of AI and other digital technologies to “promote a better society, more fair, more just—a world where technology promotes broad social inclusion based on fundamental rights, democratic institutions, and the rule of law.”
CAIDP has a few different projects and approaches, but the two projects I’ve been involved with are the AI Policy Clinics and the annual publication of the Artificial Intelligence and Democratic Values Index (AIDV). The AIDV offers a snapshot and historical context on the status of AI governance for 80 countries. It includes qualitative reports on the countries and quantitative rankings for their alignment with human rights and democratic values.
Which projects are you directly working on?
Last year I completed the AI Policy Clinic and the Advanced AI Policy Clinic. The AI Policy Clinic was great for exposing me to international frameworks to govern AI and privacy and some of the main concerns in AI governance. It was also my formal introduction to policy research and engagement. As a Research Group member in the AI Policy Clinic, I found AI policy news and public voice opportunities to consult on AI policy, and I contributed to the AIDV by finding news to update the Japan country report with my team. For the Advanced Clinic, I served as a Team Lead for the RG members completing the AI Policy Clinic: I evaluated their work and supported their updates to the AIDV. To get certified after both clinics, I passed a test.
In September, I started contributing to CAIDP’s mission as a Policy Group member on the G7/G20 team. PG members are the ones who draft the comments and statements CAIDP publishes and submits to governments, international organizations, and other bodies seeking feedback on their AI policies. My team commented on Australia’s proposal for Mandatory Guidelines for AI in High-Risk Settings, supporting the principles they defined, recommending red lines or prohibited uses of AI systems, and providing feedback on the proposed guardrails. We also wrote statements to the G20 and G7 in relation to their reports on public use of AI, responded to Saudi Arabia’s guidance on deepfakes, and responded to Canada’s consultation on AI in public administration.
Since November, I’ve taken on the role of Associate Editor for the AIDV. I’m charged with reviewing and integrating the updates the RG members found on their assigned country reports. This project will take me through April 2025.
What do you think about the European “AI Act”?
It’s a bit too soon to tell how the European AI Act will work out, but I admire the achievement of comprehensive legislation to guide the development of AI. The AIA provides a framework for thinking about risk and specifies systems or uses that present unacceptable risks. We need that. Organizations like CLAD (the Centro Latinoamericano de Administración para el Desarrollo) adopted elements of the framework in their Charter on AI in Civil Administration, so the Brussels effect is working there.
I haven’t analyzed the EU AIA closely, aside from reading about the prohibited uses for a consultation I did with the CAIDP Policy Group on Australia. I know there are other issues with it: Joanna Bryson (I think) wrote a LinkedIn post about her concerns that the focus on the AIA might inhibit the product liability laws that intersect with it. Personally, I would be more comfortable with differentiation among systems focused on uses, applications, or capability rather than compute (or FLOPS) as in the EU AIA. Principles and use-based approaches provide more flexibility in responding to new innovations and technological achievements.
Some companies claim the EU is over-regulating and even hindering innovation. For instance, very recently, Mark Zuckerberg and Daniel Ek wrote an open letter on why Europe should embrace open-source AI. What do you think about it?
The legislation-vs-innovation framing presents a false dichotomy and is overly simplistic. There’s more at play than regulation = restriction. It feels very Chicago-schooly to me. We can’t rely on consumers or “the market” to curb AI applications that cause real harm to a lot of people. Consumers don’t always make the best or most responsible decisions, and usually don’t know or understand what goes into the systems or products. Especially with tech, much of our consumption is automatic rather than intentional. Zuckerberg and Ek’s complaint that legislation moves slowly and tech moves fast uses FOMO to promote the “move fast and break things” ideology over protecting people and their rights. It’s governments’ job to do that. For AI, whether it’s the data we use to train models or what we use the models to do, we need to move slowly and make sure we do it the right way.
Regulations provide a framework and guidance for innovators to know what they can and cannot do. Court and agency decisions will provide guidance on how comprehensive AI laws interact with liability laws and the like, but we have to start somewhere. Joanna Bryson wrote a post about this recently. LeCun and Bryson show how antitrust regulations and investment play into this, too.
We need regulation for AI and clarity around where and how existing laws and regulations apply to AI products and the effects of AI systems. There are some things we should not do, and there are some things that we have to be careful to do the right way. Virginia Dignum’s book Responsible Artificial Intelligence does a nice job of responding to this question by showing all the human choices that go into developing AI systems.
Why do you think Responsible AI / ML should be relevant for a business?
Any business or company that wants to use AI/ML should ensure they do so responsibly. Procurement of any kind of technological system or tool requires due diligence, right? The Responsible AI approach for me is about good AI governance and being conscious of and intentional about the decisions that go into developing and using the systems—and deciding whether a system should be developed. It’s about making sure businesses are developing tools that solve their problems and can be managed with the resources they have (or that they get the resources they need).
Responsible AI (RAI) is good business practice to protect reputations and investments. Alison Taylor’s book Higher Ground: How Business Can Do the Right Thing in a Turbulent World isn’t really about AI, but the principles she promotes apply. She argues that business leaders should operate from their First Principles—what their company is about—and center human rights. Do no harm is a pretty good minimum, and I see Responsible AI as part of that.
Looking ahead, what are the biggest changes we will see in the responsible AI field in the next 5 years?
My cynical answer is that RAI will become a compliance requirement, something companies do to complete a checklist rather than to uphold the values and spirit of RAI. Many of the jobs I see want people with technical and data science experience, and I think many companies see this as a technical role rather than something more strategic and human. Some of the roles ask for philosophers, and ethicists have been in the trenches since the beginning. A more hopeful answer is that RAI will become a robust, truly interdisciplinary field. It’s harder to measure, but it brings value by guiding companies to invest in systems that work for the specific needs of the business and that add value to the customer experience while protecting privacy and fairness. In government, it means that we discontinue or never develop (take away $$) systems or applications that violate human rights and undermine trust in government.
To finish the conversation, I would like to know your perspective on the gender gap in STEM. According to recent statistics, it remains significant, with women making up only 28% of the STEM workforce. Why do you think this is happening?
I don’t have experience in the STEM workforce, but I’ve read a decent amount about this and about the gender gap in pay and other areas of society. I think it comes down to systems and societies built around the male-breadwinner model that are only beginning to accommodate dual-income families and the desire for more equitable divisions of household labor. To be a top scientist requires ongoing study and tinkering, but it can be hard to sustain that focus and commit the time when you have little ones at home.
I think the biases we see in AI systems also reflect this division and the choices that are made about what gets developed. Following Cindy Gallop on LinkedIn has exposed me to the investment gaps for female founders in general but especially in tech. The kinds of problems we solve with tech—or see as being worthy of investing in—can also keep women and girls out of STEM. If they’re told that women’s health, for example, isn’t a worthy research topic (or if they can’t get funding for a menopause study) they might leave the field.
I have a daughter and see more and more representations of women and girls in STEM, so that’s a good start, but they also contend with other media representations of women. I read somewhere that social media has further homogenized our ideas of beauty, so those expectations, and the work and time girls and women need to meet them, are also a factor.
And what can we do about it?
I think men taking on more of the domestic labor and being conscious of what they leave for their partners to take care of is a huge first step. I didn’t realize how much time I used to have until I had a kid to fill in all those little spaces in my day that I took for granted. My husband and I recently implemented the Fair Play system (from Eve Rodsky’s book) and I’m actually writing this during a slot of time I claimed on Sunday mornings to do my work. If I didn’t leave the house to take this time, I would be cleaning something or entertaining my daughter. Now, he’s in charge of that time.
Men speaking up and being open about their family or caretaking commitments is another huge step. In an interview I had this summer, the hiring manager mentioned that he enjoyed working from home a couple of days a week because it allowed him to do drop-off for his daughters; that went a long way toward showing me the company was family-friendly.
Getting more women into positions where they sign the checks and make decisions is another way. Caroline Criado Perez’s book Invisible Women reveals how society is set up on the old model and the ways that limits opportunities and access for women. The book focuses on data gaps—ways women’s time and needs get obscured in data—and what it shows me is how old biases get perpetuated. So many of our systems and decisions are based on faulty assumptions about what would be best for people or how to solve problems. It helped me imagine a different kind of world and rethink what I need, but I think the issue is having people ask the right questions, and we need more women in power to do this. For example, Criado Perez found that crash test dummies meant to represent women are often just smaller men, despite women’s typically different distribution of weight. This means that things like headrests and seatbelts are designed to fit and protect men and may even harm women (or are more likely to harm women than men). What if we had women designing the dummies and being expected to make them for other women rather than for a male-default human?
None of this is STEM-specific, but I think these, along with leaders who are committed to eliminating harassment and trusting people to do work with the flexibility they need, will bring us closer to equity.