From homework help to heartbreak, chatbots are reshaping adolescence. Here’s what every parent needs to know—and how to stay part of the conversation
When Adam Raine started to struggle during his sophomore year, his family did everything they could to help him. Adam was having a flare-up of a long-standing health issue that caused him to miss so much class that his family decided he should enroll in an online program and finish the year at home. Being able to set his own schedule seemed to be a positive thing for the 16-year-old. Though he stayed up late and slept in, his grades improved, and he went to the gym with his brother almost every night.
To help with school, Adam started using ChatGPT, a chatbot powered by artificial intelligence, and signed up for a paid account.
But unbeknownst to Adam’s family, he was talking about more than just schoolwork with the chatbot. He was disclosing plans to end his own life. And ChatGPT didn’t just listen — it engaged. It gave Adam advice when he asked about suicide methods, and it deterred him from seeking help.
In April of this year, Adam died by suicide. His parents didn’t learn about his relationship with ChatGPT until after his death.
Adam’s story is, sadly, not the only one of a teen harmed by chatbots and AI. Adults must also contend with ever-present AI, including the rise of AI companions (digital “friends” you can talk to) and AI “assistants” that answer questions. These companions are designed to create conversations that feel meaningful and real. A recent study reported that 72% of teens have used an AI companion.
But AI chatbots have sent sexually explicit and violent messages, encouraged eating disorders through diet advice, and affirmed plans of suicide or self-harm, likely contributing to multiple teen deaths.
All of this raises the question: How do you talk to your kid about AI? And is there anything good that AI can offer young people?
To answer these questions, we called Dr. Lisa Damour, a clinical psychologist, best-selling author, and speaker who specializes in adolescent and family mental health.
The interview below has been edited for clarity and condensed for space.
The Preamble: What is appealing about AI to a young kid, a tween, or a teen?
Lisa: A huge part of it is what’s appealing to everybody. It’s a remarkable technology that can do all sorts of things we’ve never seen done before. Kids are people too, so the things that appeal to adults who are engaging with it are also going to appeal to them. And kids, and teenagers especially, are interested in novelty.
The Preamble: What benefits do you see AI having for young people?
Lisa: I’m not so sure, to be honest, because I think it really can take the place of all sorts of things that are really, really important for their development: learning to work through challenges with other people, learning to do intellectual and academic work, learning to be a critical thinker, learning to persist when work becomes tedious or difficult.
There are some really foundational elements of growing up to be an ethical, contributing member of society that seem to be threatened by kids just popping onto AI to have it do their homework, or tell them who was right in a fight, or any of those kinds of things.
I’m also hearing about kids who get into text fights with their friends, upload the texts to AI, ask it to weigh in, and then go from there. So you’re not learning conflict resolution.
Then if you end up with an AI companion, you’re not learning what real friends are like, which is that sometimes they’re disappointing, sometimes you’ve got to work things out with them, and they’re not always going to tell you what you want to hear. So I’m not sure there’s a healthy use for kids.
The Preamble: Do you see any benefit of AI for young people who are about 15-18 years old?
Lisa: I can also see that there could be totally benign uses with older kids, like, “I have this amount of work to do this weekend. What’s the best way for me to distribute it?” Something like that. Or, “I’m going to a place I’ve never been before. What are all the fun things to do? Put together an itinerary for the weekend.” Why not? Right? That could be fun for them to experiment with. But there, we would just want AI to make their lives better by helping them enjoy their time with other people more, not in a way that stands between them and other people.
The Preamble: What are the dangers of young people using AI companions?
Lisa: AI companions are worrisome because they’re not designed with your teen’s health and safety in mind. They’re designed to be attention-capture machines, and so the fundamental principles that drive how AI operates are often in conflict with what is good for kids. For instance, with the goal of keeping the young person there as long as possible, AI is designed to be sycophantic, and this is a problem on several levels. First of all, it’s extremely engaging. Who doesn’t want to be told that they’ve got great ideas, and that no one understands them but they really do make sense?
So even if what the teenager is sharing with the AI is neutral or benign, it runs the risk of just keeping them there far longer than is a good use of their time. Worse than that, and there’s decent evidence of this, AI will affirm terrible ideas, dangerous ideas, or worrisome plans. There do not seem to be effective guardrails to keep AI from making recommendations that are actually terrible, if not lethal. So this is tremendous grounds for concern.
The Preamble: Since there don’t seem to be effective guardrails, what would you say to a parent who’s concerned about their kid using AI?
Lisa: I think the guardrails unfortunately sit with the family, and it’s really important that the family put guardrails around technology use.
I know that it’s very frightening for adults to have their teenagers engaged with AI, social media, all of these things we did not have. The fact that we didn’t have them as adolescents can make them feel especially harrowing. It’s interesting: as a psychologist who answers a lot of parents’ questions about teenagers, I get far fewer questions about drinking at parties than I do about AI and social media. I think that’s because parents feel like, “Well, that we understand. We did that; we’ve got a handle on that.” And I get it. If it’s unfamiliar, it’s a lot scarier.
The good news is that we have studied risk in teenagers for decades, and even though the risks are new, what we know about risk applies here too. The way I like to think about it, and this is really corny, but I made it up and I’m going to stick to it: Reducing risk comes down to two Rs: rules and relationship.
This is true for all risks. You make rules that actually make sense to the kid; you don’t make arbitrary rules. If we stick with the party example for a minute, you don’t say, “You can’t go to parties.” You do say, “I don’t want you drinking at parties, because there are too many variables there, and if you’re not sober, something could go really wrong.” That makes sense to kids.
With technology, you can make rules like: You’re not taking it into your bedroom. Whatever happens in a digital space is public and permanent, so I want you to use it in public spaces.
You can also make a rule that, with younger kids, it’s used with an adult nearby, or with an adult able to look at what they’re up to, until the adult has confidence the kid is using it in a healthy, safe way.
Then the other part is having a good working relationship with your kids so that when something goes wrong, they ask you for help. When they go to a friend’s house and see something on social media they shouldn’t have seen, or when they mess around with AI and some sexually explicit content comes their way and they’re pretty freaked out by it, they won’t be afraid to ask the adult who made the rules for help managing the fact that those rules don’t always prevent the outcomes we’re worried about.
And those two together are really important. A lot of how you keep the second R in place is how you do the first R, right? If you work in partnership around the rules, have a good rationale for why you’ve made the rules you have, keep the kid you’re dealing with in mind, and are respectful of kids, usually you can keep a good working relationship so that you’re their safety plan.
The Preamble: That’s a great third “R,” the respect part. You have to respect what your kid is doing, and like you said at the beginning, kids are humans too. They want to live their lives, and you have to respect them and their choices, and also be the fallback when they see something that scares or hurts them.
Lisa: Yeah, let’s throw in that third “R”! Three Rs! Being respectful of kids and their interests and their curiosity, having rules, and protecting your relationship with them helps a lot.
The Preamble: Along those lines, what would you say to a parent who says, “No, I want really strict guidelines. I want to say absolutely no AI, ever. Not in this house, not at school, never.”
Lisa: I would say that if it were that easy, that would be awesome. Right? In the same way, we could say to kids, “You are never to drink.” You can say all sorts of things. These are adolescents; we don’t have that kind of control. I think that if you make rules that sweeping, you quickly fall out of touch with what kids can actually access, and you make it so that when they do access it, you are not going to be part of that conversation.
The Preamble: And so let’s talk about that conversation. How would you approach having that conversation about AI with your kid?
Lisa: Well, the way to start any conversation about something risky is, “What do you know about this?” Some kids are like, “I think it’s terrible. I want nothing to do with it.” There are actually lots of kids who feel that way.
And some kids are like, “Oh, you mean this guy I’ve been talking to for 10 hours today? This AI companion?” So you need to know where your kid already is, and you should not assume you’re starting from a dead stop.
The Preamble: How do you make it a continuing conversation without your kids rolling their eyes?
Lisa: Like all risks, it’s an ongoing conversation. It’s not a one-and-done. Again, if we go back to treating teenagers with tremendous respect, I think you can get a lot done. So say that you ask a kid about AI, and it’s not on their radar, not something they’re thinking about. A few months later you can say, “I’m just checking in on AI. What’s the deal with it now? What are you hearing from other kids?” You can ask in other ways.
But I also think, in the rules department, that if kids can’t take their tech into their bedrooms, you can have a much better handle on how your kid is using it. It is very hard to convince young people that everything that happens in a digital environment is public and permanent, and it’s that much harder to convince them of it when they’re using it behind closed doors, or often even in their beds. It feels like they’re on a private internet.
The Preamble: How do you have that conversation, that everything you do online is watched, without freaking your kid out?
Lisa: Oh, I think it’s okay to freak them out on that. Especially in the early days with kids, when they first get texting, say, “I’m going to keep an eye on your texts for a little while, until I’m confident you’re handling this well.”
And if a kid bristles, you can say, “Look, it’s not private. Anything you put in a digital environment is not private. If you really need to have a private conversation, you do it in person, or you pick up the phone and call.” And that’s how they should live for the rest of their lives.
The Preamble: So let’s say that an AI companion does send something sexually explicit to your kid, and they come to you and say, “Hey, this happened. I’m showing you.” What are the next steps?
Lisa: First of all, you say, “You did the right thing. You are a great kid. That’s exactly what I’m here for, and I’m so proud of you.” Really, the last thing that needs to happen in that moment is for the kid to get in trouble. They’ve done exactly what we would want them to do.
Then maybe we reconsider the rules. Maybe it’s time to say, “You know what, let’s just not do this until you’re quite a bit older.” Because one of the things we have to accept is that as soon as a kid is on algorithmic platforms or on AI, there’s no version of this where something like this never happens.
A rule that makes sense to kids is, “This is why we have R-rated movies and NC-17 movies and, you know, PG-13 movies, and clearly ChatGPT is not even PG-13. So why don’t we wait?”
The reason you would want a kid to come your way is that the content is disturbing, and you don’t want them to have to sit with it alone. So the second question is: Do they need help processing whatever bizarre thing this robot just did or said? How do we take care of them in light of it?
But the goal here, above all, is an open line of communication. The most powerful force for adolescent mental health is strong relationships with caring adults. We know this inside and out. So the goal is to keep those lines of communication open so that we can support kids.
The Preamble: And say, on the other hand, that you check your kid’s phone with no idea they had an AI companion, and you find that this companion has been sending them horrifying content. You didn’t know any of this. What steps do you take when your reaction is, “Wow, I didn’t know this was happening. My kid might be hurting. What do I do?”
Lisa: I think you have a really, really tender talk with that kid. You might be angry at the kid, but I don’t think that’s going to be a great place to start from. It’s going to be a bad place in terms of what you’re hoping to accomplish, which is to make things better.
I think that you sit down with your kid and say, “Look, I found this, and I have a lot of thoughts and feelings about it, but talk to me about this.” And then you go from there. And obviously, it sounds like there are going to need to be a lot more rules and a lot more regulation, because the kid has skywritten that they’re not able to regulate it themselves.
So it’s time to walk it way, way back.
The Preamble: How can parents model the world they want kids to see with AI?
Lisa: I think one of the number one rules of all parenting is don’t talk about it, be about it. So I actually am not a fan of adults having phones in their bedrooms. I am a fan of adults keeping interpersonal connection and conversation at the center of family life, and being of use to one another and supporting each other.
I do think it is helpful if the adults are honest about their own journey with AI and what they’re learning along the way.
Want to learn more about talking to your kid? Dr. Lisa has a podcast, multiple New York Times best-selling books, and a newsletter where she shares ways to untangle your family life, such as how to ask your kid a sensitive question, including questions about AI.