
Dr. Rebecca Dekker – 00:00:00:
Hi everyone! On today’s podcast, we’re going to talk about EBB’s stance on the use of AI in our workplace. Welcome to the Evidence Based Birth® Podcast. My name is Rebecca Dekker, and I’m a nurse with my PhD and the founder of Evidence Based Birth®. Join me each week as we work together to get evidence-based information into the hands of families and professionals around the world. As a reminder, this information is not medical advice. See ebbirth.com/disclaimer for more details. Hi everyone, and welcome to today’s episode of the Evidence Based Birth® Podcast. Today we are going to talk a little bit about a topic you might not have expected to hear from us about: AI, and Evidence Based Birth®’s stance on the use of AI in our company.
So in the past two years, we have seen an explosion in the use of AI by the general public, but we’re also hearing about increases in usage by birth workers and by pregnant and expecting families. And in general, AI has been growing markedly over the past several years. There was a recent white paper released by researchers at the National Bureau of Economic Research, Harvard, and OpenAI. In that study, they estimated that there are approximately 700 million users of ChatGPT, and that around 18 billion messages are sent to it by users each week. Over the past couple of years, the use of AI tools such as ChatGPT has changed, as users move away from just asking technical questions about coding or asking for assistance drafting an email. Instead, usage is shifting towards asking for in-depth information, researching topics, asking for guidance on personal issues such as mental health, and much more. We’ve also been hearing from birth professionals about how parents are using AI to answer questions that they have about pregnancy and childbirth, and we have witnessed some birth professionals themselves starting to use AI to assist them as they run their small birth businesses and serve their clients. So today, along with my co-host, Dr. Sara Ailshire, we are going to give you a behind-the-scenes look into how we developed our own AI policy at Evidence Based Birth®. We’re going to share our perspectives on ethics and our decision-making process in terms of how we will or, mostly, will not use AI in our organization, and what you all can expect from us in the future. In case you’re getting anxious about what we’re about to talk about, I can go ahead and give you a sneak peek: in general, our stance at EBB is to avoid the use of AI as much as possible. And so that is what we’re going to explain to you today. So here with me, I have Dr. Sara Ailshire.
Sara is a member of the EBB research team, and she has come on the podcast many times to share research updates. And as a project that Sara did last year, she researched and drafted the AI policy for EBB’s workplace with input from everyone on Team EBB. So Sara, welcome back.
Dr. Sara Ailshire – 00:03:22:
Thank you for having me back. I’m really looking forward to talking about this with you today.
Dr. Rebecca Dekker – 00:03:26:
So Sara, can you tell me a little bit about why you were interested in kind of taking the lead on this project of researching our options at EBB, and what kind of a policy we might come up with?
Dr. Sara Ailshire – 00:03:37:
Sure. So I come from an education background. I was completing my PhD when I first started working at EBB, and I was teaching. ChatGPT launched, I think, at the end of 2022. And in 2023, I was seeing my students using it. They weren’t doing their own writing. In education, people were trying to figure out what they were going to do, and I think a lot of people felt kind of on the back foot. I started to see ChatGPT coming up among birth professionals, people recommending using it for their small businesses. I thought that it made a lot of sense for us to kind of sit down all together and figure out: were we using AI tools? What did that look like? And did we want to be a little bit more intentional about the direction that we were going to go in, regardless of what that was?
So, what about you? What did your personal journey look like when you encountered AI in your role as CEO? And what did it make you feel like when you were watching all of this AI stuff unfold out in the business world?
Dr. Rebecca Dekker – 00:04:25:
My husband has a close friend who is kind of enmeshed in the tech world, and they were texting back and forth about it. And this is back when you had to have, like, preview access, special access. And Dan sent him a few things to query, or to ask ChatGPT about: birth and newborn procedures, Evidence Based Birth®, Vitamin K, some of those things. And the answers were okay. Like, they weren’t great. They had some misinformation in them. But that’s when I realized, because I recognized some of the language in the responses. I was like, “Oh, that’s from EBB. Like, they just took that from EBB.” And that’s when I realized right away that these companies were going to scrape the internet, take the information that myself and other people and other content creators have made and written, use that to train their large language models, and then have everyone visit their websites and not come to our websites. This is a problem. They’re going to steal everyone’s content, data, and information, train their models on it, and then they’ll have no need, you know, for people like me, because people will just go to the chatbot to talk. In some ways, this is intellectual theft. It’s exploitation. And then we started seeing the problems with energy usage, and, I would say, almost like people’s cognitive abilities decreasing. And those problems became clearer to me over time. This is going to get worse before it gets better, if it ever does get better. And meanwhile, I felt like an outcast, because my colleagues in the field of women-owned businesses, other leaders, and in other industries had very different feelings about AI. Especially in more male-dominated fields, I was hearing reports that companies were actually requiring all of their employees to use AI, like, all the time. And it was so different from what we were doing.
The investors and the companies that are building these AI tools are primarily owned by men and run by men, and they’re just taking intellectual property from authors and creatives, using it to train their models without reimbursement, and getting rich. And then they turn around and fire as many of their own employees as they can. So it seems so exploitative and unethical.
And so in 2023, we put a notice in the footer of our website saying that we do not consent to the scraping of our website for these purposes. And then over the next few years, traffic to the Evidence Based Birth® website began to decrease. In the beginning, we used to have like 50% of our website traffic come from people doing internet searches. You know, we were ranked very highly. People would click on our links, come visit our website, and learn more about what we do and the information that we can offer. And now the number of people who visit the Evidence Based Birth® website from doing searches online is very, very small. And that’s because the internet search functions, in my opinion, have started a death spiral. The reason being that all the information is starting to turn into slop. If you do, you know, a search on a general topic using Google Chrome, for example, many of the top links that you find now are written by AI. So you’re getting stuff that was written by machines and not reviewed by humans, in many cases. There are also the Gemini AI results at the top of the Google search, which compile information from a bunch of sources and then provide a couple of links to their supposed references. And the quality of that is very poor, too. You know, whenever I try to click on those links, I can’t find the information that Gemini is supposedly citing. And that really picked up in 2025, to the point that people are calling it the death of the internet, which I do believe. The quality of what you can find... it used to be that what you could find when you searched depended on what other people recommended and found really helpful. Now you have this situation where the quality of the articles that you find is poor. And then those are the articles that are being fed into and training large language models.
So you almost have this like downward spiral of quality because now AI is citing AI, which is citing AI with no human experts involved anymore.
Because we are an information-based company, whose mission is to create evidence-based content that helps empower families and communities, this is really impacting us. That would be, like, my emotional journey as the CEO: being like, okay, so things are changing. And that’s my job. Like, I’m not going to complain. My job as the CEO and founder of EBB is to notice trends, understand them, and figure out what our direction is going to be and how we’re going to handle them, because nothing ever stays the same. And I know that. And obviously, I do not reject technology. I use technology in a lot of ways. But what I was seeing is... I will say a brief curse word, just in case anybody has little kids listening. There is this concept known as enshittification. There’s a whole book written on it, about how the quality of things declines as business owners, investors, and private equity get more and more greedy. They try to make more and more money by extracting... firing people, extracting things from people, and the quality just gets worse. It’s, like, why many of our favorite brands, you know, we don’t like them now, right? Do you mind just defining AI hallucination? And also, we haven’t really given the definitions of AI yet, and I know you collected those. Could you read those for us?
Dr. Sara Ailshire – 00:10:39:
So artificial intelligence refers to any technology or machine that can perform tasks that we generally associate with human intelligence. That includes things like problem solving, decision making, planning, reasoning, and creation, among other things. So you think about different AI tools, things that can generate video, make artwork, or write an email for you. Those are tasks that we traditionally associated with human intelligence, but are now something that you can ask these AI tools to do for you instead. Weak AI, or narrow AI, is a term that refers to systems that can perform complicated but specific tasks, like analyzing data sets, identifying patterns, creating text or images, or making predictions. And this is what all existing AI tools fall under. That’s in comparison to another concept, artificial general intelligence, which refers to what is, at least for now, a hypothetical ability of a machine to demonstrate broad human-level intelligence. Think about the computer from Star Trek, or Jarvis, I think it was called, in Iron Man. You know, a robot or artificial intelligence that is just agentive. It’s intelligent in the way that a person is. It doesn’t require prompts. It isn’t reliant on, sort of, you know, routines of pattern recognition; it’s genuine intelligence. And we’re not there yet, for now. Another term that you might hear thrown around is generative artificial intelligence, and that refers to artificial intelligence tools that use algorithms to create content like text, images, audio, or video. They create those things based on patterns found in their training data, and they use those patterns to respond to prompts written by human beings. It’s not necessarily writing your email and, like, thinking about your email; it’s just writing an email based on patterns from the possibly billions of emails that it was trained on, to help it be able to fulfill that task.
A large language model, or LLM, is a giant statistical prediction machine that repeatedly predicts the next word in a sequence. And these are things that are trained on huge amounts of human-generated data, including the data from EBB. Large language models are what allow human beings to communicate naturally with a machine, through prompts rather than by writing code. You don’t have to write Python. You can just, you know, type in and ask the AI tool of your choice, Gemini, Claude, ChatGPT, among others, to do what you want it to do, and that’s because of the large language model. And finally, hallucinations. And this is a big issue for us. This can be a big concern for, at least, my work as a researcher. Hallucinations are when AI generates data that includes responses that are incorrect, misleading, or false, but they’re presented as being true or factual. And my first encounter with a hallucination was back when I was an instructor. I assigned students a paper on a kind of obscure ethnographic essay. And I think that the AI tool that the student used had not yet encountered that anthropology essay in particular. And it just made up a bunch of stuff. And I could catch it, though, because, you know, I knew that essay pretty well. I assigned it. I was able to deal with that. But something that I’ve learned more about in the years since is that you’re now seeing academic articles being retracted from journals because of undisclosed AI use, or because the AI tool used hallucinated sources, or had hallucinations where the source wasn’t fake, but it attributed something to a real peer-reviewed journal article that was not actually in that article. So it’s, like, made-up citations or made-up attributions. This is a real problem in research.
And that’s another thing that we need to take into consideration. One of the most important things that we do at Team EBB is share the evidence on pregnancy and childbirth with, you know, our audience, and when that’s your job, it is really concerning to hear that some of that evidence can be called into question if AI tools were used irresponsibly. So in addition to these concerns from academic research, we’re also seeing a real explosion in what I think you can call AI slop articles. They’re published online in order to get traffic, clicks, people to view ads, so people can make money. These articles are never, or maybe not necessarily, reviewed by a person for accuracy. They’re not generated by a person who would know what to look for, who would be able to catch a hallucination, for example. What happens if a person reads something like that looking for information about pregnancy and childbirth? And, you know, that is something that really bothers me a lot. When we started talking about AI, you and I, Rebecca, I was really excited to do research and work together with you and with everybody on the team to develop a basic framework that we could use. Personally, I don’t use AI tools. I worry about their effects on people and on the environment. But avoiding something because it worries you doesn’t do a lot of good. I’m a worrier, and I know firsthand that avoiding something doesn’t make it go away. So I wanted to kind of do what I could to learn more: reading academic research on the effects of AI, reading pieces from business articles and from law articles about the use of AI in those fields, and also some narrative AI forecasts from experts in the field. You know, I might be very skeptical. People who work with AI are extremely excited. And I thought it was important to see what they had to say and try to understand where they’re coming from a little bit better, too.
So, when we began developing our AI policy at EBB, we had some ethical concerns, and we wanted to sort of lay those out for our audience to sort of help them understand our decision making. So, from all of that, we developed this list of our main ethical concerns. So could you introduce to our listeners the first one of those ethical concerns that we discussed as a team?
Dr. Rebecca Dekker – 00:17:03:
You know, talking about ethics is something that I enjoy doing. And I remember when we published the research on newborn male circumcision, it was our first article where we covered the evidence and ethics on the topic, because it was more than just research on health benefits and risks. And I feel like this is similar. So we have threats to the accuracy of information, environmental threats, privacy concerns, intellectual property concerns, and humanitarian and public health concerns. I want to start off by talking about how the main role of Evidence Based Birth® is to publish accurate, accessible, and inclusive research that empowers communities. And because of the risk of hallucination that you talked about, AI tools can potentially threaten the accuracy and quality of information, especially if we relied on them here at EBB. There are also other inherent biases that are built into these large language models. When you’re talking with a chatbot, I mean, just think of it. It’s similar to a systematic review of research: what was the research that they put into that study, right? Garbage in, garbage out. And who were these models made by, trained by? Whose perspectives did they not include? And then what guidelines or parameters did they put on these language models? So in many cases, I have heard from birth workers and parents that when they do a search for something, how they frame the search matters. Because in many of these generative AI platforms, they’ll give you back what they think you want to hear. And they may give you a biased answer. So we have the risk of hallucination, and we have the biases that may be built into these tools. For birthing families, for healthcare professionals, for birth workers, there could be serious real-world consequences if you put inaccurate or misleading information from an AI tool into practice.
Like, we’re talking about your life, your pregnancy, your birth, your child’s life. On the other hand, I don’t want to neglect the fact that AI tools could be used in strategic ways to make information more accessible. And one way that I’ve seen it help, even before these generative AI platforms became popular, is through live transcriptions, automatic closed captions, and translation tools. This can make information much more accessible for the hearing impaired, for people who speak languages other than English, and for people who learn better through different methods. Perhaps, for example, by having something read to them instead of them having to read it themselves. So that’s kind of the first concern we have, and we’re still seeing research come out on this. Like, what is the accuracy of the information that people get from AI? Sara, do you want to talk about the next ethical concern?
Dr. Sara Ailshire – 00:20:14:
Yeah, absolutely. So one of my primary ethical concerns at the very beginning of this, before I knew much else, was the environmental impact of the data centers that make AI possible. And this is also something I know is a really important priority for Evidence Based Birth®. The data centers that support AI tools require enormous amounts of energy as well as fresh water to function. They need the energy to power the servers, and they need the water to keep them cool. For example, generating one AI cartoon image takes as much energy as it would to completely charge your cell phone, right? The effects of AI data centers on communities can include increased electricity costs and increased exposure to air or noise pollution because of the increased use of fossil fuels such as natural gas, as well as water scarcity. And these issues of environmental degradation and pollution intersect with other issues that are really important to us at EBB, especially the problems of racism and white supremacy. These things can come together to manifest as a form of environmental racism. Community organizations have stated that the construction and expansion of data centers is disproportionately impacting Black communities and other communities of color. Also, rural communities are being asked, in some cases, to shoulder the burden while maybe not seeing very much of a benefit.
Dr. Rebecca Dekker – 00:21:47:
Yeah, I know that here in central Kentucky, where Evidence Based Birth® is located, there is a rural community that is currently under consideration for a data center. And, you know, the community has major concerns. The sound pollution can cause migraines; it can cause suffering, with it being so loud. I know there were articles about Mississippi, about how they’re basically using these jet turbines to power one of Elon Musk’s data centers, and how people who live near there can’t sleep and their dogs are upset. And in many cases, these things were put into place so quickly that the communities didn’t have time to fight back or have a say. And in places where these data centers are going to be located, we’re going to face higher electricity costs. And where is the concern about, you know... I mean, we’re always talking about climate change and trying to be good stewards of our environment, and all of a sudden we have to, like, massively increase the amount of electricity that our country is using? It just doesn’t make sense, especially when a lot of what these tools are doing are things that don’t necessarily need to be done by these tools.
So, violation of privacy is another concern that we had here at EBB. We do not provide or sell customer data to outside organizations. And, you know, we feel really strongly about that. But we are now seeing that a lot of the tools we use, the software we use to run our business, are starting to embed AI into those tools. I mean, this has the first and immediate side effect of often increasing the prices of everything that we do. So we have to, from a business perspective, figure out, you know, are we going to switch tools, or how can we make sure that we’re not charged more, or that we turn those features off? But also, every software company out there seems to be trying to develop their own AI thing. And so they’re starting to harvest data from their customers, which includes us. They might add to the fine print of their terms and conditions that they’re going to be harvesting our data. They might be transferring that data to other companies. So that has been a concern of mine as well. We have to, like, really stay on top of every tool that we use. Like, what is their stance on AI? What are they doing? Are they increasing their prices on us? Are they going to be using our data to train their models? And that has been complicated. So it’s something we have to think about.
Another key issue is sustainability. On our team, that’s one of our core values as EBB team members. We want to continue serving the public and remain a viable company in the long term. And that also includes remaining profitable so that we can stay in existence. So I mentioned that we put that statement in the footer of our website. It said copying, reproducing, capturing, or scraping for any purpose is prohibited. However, I think it’s quite clear that AI companies and their bots are scraping our website without our permission. They’re using that information to train and configure their models and make a profit. Meanwhile, an increasing number of people are reading summaries of our research on the AI platforms without ever visiting our website, often because our work is not credited, cited, or linked to. And so as a result, this is a big threat to our ability to continue operating, because it’s lowering the value of what we do. And so that’s another, you know, big concern we have. Also, when an AI company takes our intellectual property at EBB and uses it to generate AI materials for use in their chatbots or their tools, this removes our control over that information. We have no control over how that model will react or will use that information. And when research updates are put on our website, they might not make it into that model. We were talking the other day on Team EBB about how sometimes when you ask these chatbots questions about current events, they’ll gaslight you and say there is no war happening in this area right now. Or they’ll call Trump “former President Trump” and say he’s not president right now, because they’re relying on their past training data. So, bringing it back to birth, it means that the answers these chatbots are giving you might not be the most up-to-date version. And we have no control over that. They might be citing old EBB information that we’ve already updated, but we have no control over it.
Dr. Sara Ailshire – 00:26:21:
And I know at EBB, we include reference lists. So if you’re really curious about something, if you’re using our material, you can go and you can look, and you can kind of trace a concept, or trace an idea, trace an issue.
Dr. Rebecca Dekker – 00:26:36:
Back to the roots.
Dr. Sara Ailshire – 00:26:37:
Back to the roots, exactly. And you can understand how other researchers have addressed it. You know, what has happened since. If that’s taken away, that context is lost. It kind of limits a person’s ability to, you know, go further with what they’re learning to verify, to see what has happened. You know, it’s a real loss when that context is removed and what we’ve put together kind of gets chunked down into a little summary.
Dr. Rebecca Dekker – 00:27:03:
Yeah, exactly. And like I said, you don’t know where that information came from. Like you said, you can’t trace it back. And that is where you also get into the problem where it’s starting to cite itself. So it becomes this, like, circular pattern. So there are accuracy issues. Another major concern of mine, as well as the rest of our team, is humanitarian. I think it’s quite clear that the AI companies and their investors, the private equity investors and the billionaires in charge of this industry, are developing and deploying these generative AI platforms with the explicit goal of reducing the costs of employing human beings, usually by firing or laying off large numbers of people. In fact, I think just a few days before we recorded this, another software company announced that they’re cutting their staff by 50%. And their CEO wore a baseball cap that had the word “love” on it as he announced to the entire team that half of them, you know, like 5,000 people, were losing their jobs. And they attributed it to the fact that they can just use AI to do everything. And, you know, I’ve personally known other companies that have laid off their staff because they think they’ll just have AI do the work of their staff. What I believe is happening is that these actions are designed to enrich a few people at the top of these pyramids. They’re exploiting workers by extracting their labor while they can to build these tools, then firing them, and then trying to make more money off of selling these tools. And they are completely ignoring the needs of entire families and communities. It’s like the complete opposite of what we stand for at EBB. You know, we are here to help take care of other people, but we also want to take care of ourselves and our families.
One of my favorite things about working at EBB and having this organization is that I get to provide a safe and wonderful workplace for people, where we can work together and have teamwork and get to know each other’s families. And what would life be like if we were just machines? Like... you know, to me, it doesn’t make any sense, but then, I’m not a billionaire. I also believe work is a worthwhile function. It’s life-giving, and I’m trying to use an expansive definition of work. Work may be caregiving for your family. It may be going to a physical workplace. It may be volunteer work. But in general, I think work gives people a lot of meaning in their lives. So to me, it seems very short-sighted to use AI with the goal of laying people off. And so here at EBB, we are wholeheartedly rejecting that kind of philosophy espoused by billionaires such as Sam Altman, the founder of OpenAI, who believe that generative AI should be used as much as possible to replace the work of human beings. And in all the interviews I’ve seen, they seem to place very little value on human life.
Dr. Sara Ailshire – 00:30:17:
Yeah, you know, I think there’s a lot of joy in the doing. There’s a lot of joy in reading and writing and doing work. There’s something that’s just exciting and enriching about work. I was reading an article recently, for something we’re doing for our conference, about efforts to reduce the incidence of hepatitis B. And I saw this chart that these researchers had put together about the decline. And I got really emotional about all the work people had done and everything that came together to make this chart, which showed that this really serious, terrible disease had had a really big decline. You know, there’s just something very human about that. And I’m an anthropologist, so clearly I’m very interested in humans and what they get up to. And, you know, I think... I think when you lose people, when you lose that, I don’t know if you can quantify what that loss is or what that means. You might be making gains in productivity, but not everything that’s worthwhile or worth doing is necessarily measurable in that way, if that makes sense.
Dr. Rebecca Dekker – 00:31:24:
Yeah, exactly. And I think, you know, you could go into research on feminine and masculine perspectives. And it is very much a masculine perspective to believe that productivity and efficiency are the end goal, and more of a feminine perspective... I think in every culture there are often these, you know, two perspectives, and you can have both. But I think what we bring to the table is this belief in human connection and community. I don’t want to sacrifice that to be more efficient. Does that make sense? And I think if you remove that human component, it just makes it cold. And for me, not worth being in a business relationship anymore. So those are just some of the things we’re thinking about. And then, you know, you talked about just being able to use your hands, your mind. I think creativity is a really important skill, and intuition, and those are things that we need to keep honing as human beings. We can’t farm that out to a machine, because that’s a lot of what makes us human. And so the last ethical concern I’m going to talk about is the effects of AI use on human cognition and social function. So there is research showing that using generative AI decreases your cognitive function and abilities, including your ability to remember things, so your memory. And this is a really personal topic for me, because my mother-in-law died at a young age of Alzheimer’s. And I saw firsthand what happens to you as your cognitive abilities decline. And I really got into reading a lot about Alzheimer’s research and prevention. And one of the things I learned that is a core tenet of Alzheimer’s prevention is that if you don’t use it, you lose it. That’s why it’s so important to continue to socialize, to continue to use your brain, to do things like learn a new language or learn to play an instrument or keep reading or doing crossword puzzles, because using your brain keeps it strong.
And what I see happening, and what research is starting to show, is that as you depend more on AI tools, those areas of your cognitive function become less sharp. If you’re over-relying on an AI chatbot, and I don’t care which platform you’re using, if you’re over-relying on it for critical thinking... I saw that. One CEO was telling me, “I use Claude,” or “I use this, for my deep thinking.” And I’m like, you’re letting a machine do your deep thinking, and you’re the CEO of this company? Like, isn’t that your job? You’re increasing your risk of cognitive decline in the future, and we do not have long-term research on this yet. So that is something that, I’m sure, you know, we’ll find out more about as the years go on. But there’s also a thing we’re seeing, with some people becoming, you know, socially, emotionally, and cognitively dependent on an AI chatbot to do anything. I don’t know if you’ve seen any evidence of that, Sara.
Dr. Sara Ailshire – 00:34:33:
Yeah, so I’ve read a little bit. There’s kind of a unique subculture of people who develop friendships or even, like, romantic relationships with AI chatbots. And when the tool that they have developed this social bond with gets updated or changed, they’re very concerned about losing this partner that they have sort of, like, developed through engagement and conversation and things like that. And, you know, it’s complicated, and I do wonder sometimes what is lost if, you know, the most important social and emotional relationship in your life isn’t with another human being.
Dr. Rebecca Dekker – 00:35:14:
Yeah, or if your friends no longer talk to you except through a machine.
Dr. Sara Ailshire – 00:35:15:
Yeah.
Dr. Rebecca Dekker – 00:35:19:
And then I also think about child health and development with AI use. And there’s this really funny story that I heard from my son. He was at this eighth grade night for the local high school marching band. And apparently some eighth grader got the phone number of, like, a ninth grade girl. And, you know, they were all, like, so excited. Oh my gosh, he got her number, you know, and then he texted her, and then she wrote him back using AI. Like, it was clearly a ChatGPT-generated response or something. And so then he used AI to respond back to her. And I’m like, kids don’t even know how to talk or flirt if they’re going to use ChatGPT to do it. It was funny, but also concerning.
Dr. Sara Ailshire – 00:36:01:
Yeah.
Dr. Rebecca Dekker – 00:36:02:
You know? Yeah. And my kids talk about this all the time, about how, you know, they’re trying to stop kids from using AI in school, but the teachers are using AI to make assignments.
Dr. Sara Ailshire – 00:36:12:
Yeah.
Dr. Rebecca Dekker – 00:36:13:
And my kids will, they can recognize it immediately. They’re like, this doesn’t seem normal. My kids will run their teachers’ stuff through an AI checker and find, you know. But, you know, I think these concerns about child health and development, we don’t have a ton of research on them yet. I think we will. I do want to note that as I was looking into evidence on AI’s effects on child development, I found an interview with Sam Altman, the founder of OpenAI, and he was asked in this interview, is AI making children dumber? And I don’t like that phrase, but that’s what they used. And he said, quote, true for some kids, end quote. And the entire audience started laughing. Like they thought it was hilarious. And yet, as we’re recording this, I see news articles that AI companies are beginning to press to have their products installed in schools. They’re beginning to press to have teachers trained in using these tools. And we already know that there are increased problems with cheating and decreased learning. And research has just barely begun on that. And yet there’s this push to make money off of it. So, that’s a lot of ethical concerns we’ve just gone over.
Dr. Sara Ailshire – 00:37:30:
No, absolutely. And, you know, it’s so hard being EBB, and being so evidence-based, when we’re just kind of in the early days of this. There’s so much I think we will learn later with research, but some of the early research we’ve seen hasn’t been exciting or promising about some of these social, emotional, intellectual, and educational impacts. And of course, you know, we have these anecdotal experiences, and our lived experience can also tell us that when these things devalue what is very important to us, you know, maybe that’s not something that we want to use. Or at least if we’re going to use it, we should be super, super purposeful and very careful about how we interface with it. Because we can’t pretend it doesn’t exist, but we’ve got some concerns, you could say.
Dr. Rebecca Dekker – 00:38:20:
Yes, exactly. You know, I think that’s what makes us different. And one of the things I enjoy about EBB is we do think through the ethics of something before we adopt it. In terms of practical steps, team by team: we have three main teams at Evidence Based Birth®. We have our research team, our content team, and our programs team. And Sara, I was wondering if you could talk a little bit, just briefly, about what we decided about the research team’s use, or non-use, of generative AI?
Dr. Sara Ailshire – 00:38:54:
Sure. So that’s the team I’m on. And at the heart of Evidence Based Birth®, research is the primary function. It’s the core of our offerings to the general public, to our Pro Members, to our educators. So with that in mind, the adoption of generative AI tools could pose a risk to the quality of work that the research team produces. So, for example, if we prompt a generative AI to produce an annotated bibliography, a literature review, an article summary, or some other written product, we could be at risk for an AI hallucination, for inaccuracies, for problems that wouldn’t exist if we had just done that labor ourselves. Everybody on the research team is trained in research. We have the background to be able to understand, to identify problems, to produce those things at a high level of quality ourselves. We distinguish ourselves within the industry with our focus on evidence-based, non-biased communication of the current research on topics related to childbirth. So hallucinations or the potential of bias from generative AI could put the quality of our work, and our reputation itself, in jeopardy. Another thing that’s important is that the adoption of AI tools could also erode the trust that our audience has in the research team and the work that we produce. Throughout all of EBB’s existence, before my time and now, all research has been conducted by trained academics who have many years of study and experience in research and scientific communication. And if we were to try to replace that with AI tool use alone, that would erase that legacy. So with those concerns in mind, the research team came to the consensus that there wasn’t an acceptable use for generative AI tools in the work that we do. Even if it would make us more efficient, it would add problems, and it would mean a loss of the quality, the experience, and the education that we bring to the work that we do and the things that we create for our audience. 
So the promise that EBB is making is that we’re not going to use generative AI in the creation or publication of our Signature Articles. An emerging issue in research is, again, that thing I mentioned earlier: the use of AI in peer-reviewed articles that isn’t disclosed, which could pose problems for the quality and the value of the data. So that’s something that we’re keeping an eye on. We’re monitoring current trends in AI in academic research, and we’re trying to stay alert for fraudulent data, for errors, and for big mistakes in journal publications due to the use of generative AI. And if it affects something that we’ve published and we become aware of it, you know, we will certainly address that. You know, we always want to provide you the best information. We continue to go back and improve the articles that we produce. We stand by our work for the long term, and that’s a commitment that, you know, we make to our audience: that we will be thorough and we will be diligent in keeping an eye on these things.
Dr. Rebecca Dekker – 00:42:06:
Yeah, and that was a unanimous consensus on the research team, that we won’t use generative AI in our writing and our research. I can talk a little bit about the content team, which really makes up our podcast, graphic design, social media, those sorts of things. We’ve decided that there are a few acceptable use cases for AI tools that can make EBB’s podcasts more accessible. So number one are transcripts of our conversations, which can make the content accessible to readers who are deaf or hard of hearing, English language learners, and those who speak languages other than English. So when AI is used to generate a rough draft transcription, it’s always evaluated by our own person who looks for errors and accuracy. If you want to view the human-verified transcript, they’re always posted directly on the blog page on the Evidence Based Birth® website that goes along with that podcast episode. I do want to note that the automatic transcriptions that you see in Spotify or Apple Podcasts are not controlled by us. Those are generated by those companies, and they’re not guaranteed to be accurate. Another use case is, you know, a tool that people don’t necessarily think of as AI, but it is: the AI-generated closed caption translations on YouTube videos. So I really encourage people, if they want to translate EBB podcasts into their own language, they can go to YouTube, watch the podcast there, and pick the language that they want for the closed captions. We are not, though, using generative AI to generate voice or audio translations at this time. The only voices you will hear from us here at EBB are our own voices recorded in real time. And then the third usage would be a tool that we use to generate timestamps and a rough draft description of the podcast episode. This information is always evaluated and usually substantially edited for content, accuracy, clarity, and tone by several people on our team before we publish it. 
What you hear, though, on the podcast are real conversations. Any scripts we use or outlines are generated by us. And then in terms of other work like photographs, videos, graphic design, copywriting, the language we put on our website pages, we really value the work of creatives. We value the work that photographers do, birth photographers, videographers, graphic designers, marketing people, other creatives. And for that reason, we are not going to use AI to generate images or videos. We’re not using it to write our social media posts or compose newsletters or draft web pages or edit our photographic or video content. So we have humans who will be doing that. Sara, I know you collaborate a little bit with the programs team. Can you talk about their policy?
Dr. Sara Ailshire – 00:45:01:
Sure, yeah. So our programs team are the people who help us with our Pro Members, help make our conference and the childbirth class happen, and who take applications for our childbirth instructor program. And as of now, our programs team hasn’t identified any circumstances where they would adopt AI tools as part of their general work. They have encountered the use of AI-generated materials in application materials submitted to EBB for the instructor program and for scholarship-related applications. And what they were finding was that this use of AI was negatively impacting the application materials. They were of a lower quality. And it was evident that these materials weren’t sharing real thoughts or words from the actual applicant, and looked really identical or very similar to other AI-generated applications. For people who use AI often, they talk a lot about prompt engineering and knowing how to write a really effective prompt to get the AI to produce a higher quality product. Not everybody knows that or knows how to do that. If you’re giving AI the same prompt to respond to, you’re getting these largely similar answers. You know, they really want our applicants to be themselves. It doesn’t have to be perfect and immaculately polished, but it should be hopefully real. So as of 2026, all applicants to EBB programs are being instructed to use their own thoughts and their own words when writing their applications. And EBB’s promise is that all applications will be reviewed by one or more human team members. We spend a lot of time reading those, and we really take seriously the amount of time that people spend producing that work, and we honor it with our careful attention. We also personally respond to all emails sent to us by customers. We don’t use generative AI features in our customer service platform. You’re never going to be stuck talking to a chatbot. 
A real person reads your email, thinks about it, seeks out an answer, you know, wants to respond to your inquiry or solve your problem. We also just ask that people who attend any of our programs don’t use an AI chatbot or AI tool to record meetings or transcribe the audio at a training. So we just ask people to be mindful and respectful when they work with us of our policies and our approaches.
Dr. Rebecca Dekker – 00:47:32:
I do want to quickly mention hiring. We are not adopting AI tools for hiring. We ask that anybody who applies to a job at Evidence Based Birth® use their own thoughts and words when attending an interview or writing an application. And we promise that all job applications will be screened and reviewed by our human team members. So we are not making you use your own thoughts and words and then grading you with an AI bot.
Dr. Sara Ailshire – 00:47:58:
So Rebecca, as the owner of Evidence Based Birth®, could you talk a little bit about how you’ve navigated managing everybody’s different roles, approaches, and needs when it comes to upholding and developing this policy? Because I know that some people on the team do make use of AI tools and find them really worthwhile, while others maybe don’t use them as much. So, how have you navigated that balance? Can you share a little bit about that?
Dr. Rebecca Dekker – 00:48:23:
Yeah, so one of the things we’ve done is just kind of, like, made clear some recommendations to our internal team. And one of the biggest things that we teach our team members, and I think you all would benefit from hearing this, is remembering that any data you enter into an AI tool is not guaranteed to be secure. So we tell our employees and contractors that if you’re using generative AI tools on your own, you should never put our EBB company info into non-approved platforms. And we’ve also been disabling any AI features of the third-party platforms that we use here at EBB. In addition to not entering our private company info into AI platforms or tools, we are strongly discouraging our team members, or anyone who’s listening, from putting your own personal medical, pregnancy, or mental health info, or any information that’s private to somebody else, into AI platforms. It is not guaranteed to be secure. I don’t care what they tell you. We’ve all lived through different hacks of different platforms. And many AI companies are collecting data that you enter and may use that to try and target you for marketing purposes. Also, I don’t think people realize that AI chatbot data can be subpoenaed in court cases, or it could be unexpectedly distributed to outside organizations or the dark web through a security vulnerability that might not be predictable. So if you’re a healthcare worker, don’t put info about what happened to a client into a chatbot. Same if you are a parent who experienced malpractice or negligence: don’t go to a chatbot and put in things, because anything you put in there could be used against you in a court of law. It can be subpoenaed if you’re in a legal proceeding. So I think it’s important for people to realize that, you know… basically, we tell our people, if you don’t want something on the front page of the internet, don’t put it in a chatbot, right? 
If you really need to use the chatbot, maybe put in some dummy terms. Or, you know, it’s hard, though, because people do want to customize these tools and they want the tool to get to know them. Well, then you have to understand that you are giving this company your childhood trauma. Like, that information could belong to them now, and it’s not guaranteed to be secure. So just be wise and discerning about who you’re giving your information to. Again, I have lived through enough technical security vulnerabilities to know that it can happen to anybody. We are also really encouraging you and our team members at EBB to continue to use your own thinking abilities, your own creativity, your critical analysis, and your intuition in your life. We value group brainstorming as well, and deliberation. So if people want to use AI to brainstorm, I understand. But we can still get together and brainstorm as people. And I think our ideas are much more fun and creative, and we enjoy the end result a lot more when it’s something that we came up with as a team. And yeah, that’s kind of it. Those are the policies for our three teams.
Dr. Sara Ailshire – 00:51:36:
I know that when we were developing our approach, we really wanted to be super flexible and mindful that things, like you were saying, could change so much. You know, and that we continue to really value that sort of deliberation and that discussion, that iterative process.
Dr. Rebecca Dekker – 00:51:55:
And yeah, I do want to say, you know, Sara helped us create a decision-making tree, so that if a situation does come up where we need to change a policy or consider the use of AI or a different strategy, we have an actual deliberation and decision-making process, with discussion questions for our team, so that as a team we would go through that issue together and figure it out.
Dr. Sara Ailshire – 00:52:14:
Absolutely. Kind of just echoing what you were saying about the value of the group and brainstorming and thinking together. So with all that in mind, Rebecca, do you want to share the bottom line on AI at Evidence Based Birth®?
Dr. Rebecca Dekker – 00:52:29:
Yes. Okay. So here’s the bottom line. Here at EBB, we have significant concerns about the ethics of generative AI tools, including concerns about the potential for inaccurate or biased information, threats to data privacy and to our own business sustainability, and we have environmental, humanitarian, and cognitive health concerns. So far, our main approach has been, and continues to be, generally avoiding the use of generative AI at Evidence Based Birth®, especially with the research materials that we are creating for the public. So we have been educating our employees and our contractors about the very few cases where we might use generative AI. Those are specifically with making the podcast more accessible. However, we have not given permission, we never have given permission, for AI companies to scrape our materials, to harvest our intellectual property, to train their AI models. However, we do believe it is happening without our permission. We are going to continue monitoring developments in AI so that we can provide updated guidance to our team. We want to continue prioritizing, at Evidence Based Birth®, accurate information, the wellness of our team members, group deliberation and brainstorming, human creativity, privacy of our data, and ethics. And when I think about the big picture, I think about how family creation is, you know, what we’re doing at Evidence Based Birth®: we’re helping people as they create their families. And that includes reproduction, pregnancy, birth, postpartum, parenting. These are some of the most inherently human experiences on the planet. And we want to honor that. So our promise to you is that the research work we publish at Evidence Based Birth® will continue to be made by humans for humans. And that’s it. Thanks everyone for listening, and, you know, stay updated. We’ll hopefully do another episode at some point about the research on the accuracy of birth information in AI platforms. And till then, I’ll see you later. Bye.
Dr. Sara Ailshire – 00:54:48:
Bye.
Dr. Rebecca Dekker – 00:54:49:
Today’s podcast was brought to you by the Evidence Based Birth® Professional Membership. The free articles and podcasts we provide to the public are supported by our professional membership program at Evidence Based Birth®. Our members are professionals in the childbirth field who are committed to being change agents in their community. Professional members at EBB get access to continuing education courses with up to 23 contact hours, live monthly training sessions, an exclusive library of printer-friendly PDFs to share with your clients, and a supportive community for asking questions and sharing challenges, struggles, and success stories. We offer monthly and annual plans, as well as scholarships for students and for people of color. To learn more, visit ebbirth.com/membership.
