A recent study by the Australian Government’s independent online safety regulator, eSafety, found that AI companion chatbots like Character.AI, Chai, Nomi and Chub AI are failing to protect children from harmful and sexual content. The research comes at a time when these apps are easily available in India with no major restrictions. In fact, India is among the biggest markets for these platforms.
The study found serious gaps in basic safeguards for children. According to the Australian watchdog, kids were able to access adult features in all four AI apps.
“None of the providers had robust age verification measures, relying instead on app store ratings or self-declaration at signup. Chai, Chub AI, and Nomi did not direct users to mental health or crisis support when self-harm was detected in user-prompts,” eSafety said in its report.
According to the research, apps like Chub AI, Nomi, and Chai showed serious gaps in how they handled safety on their platforms. They did not properly monitor user inputs or AI outputs across text, image and video models, which increased the risk of harmful or illegal content being generated.
Nomi and Chub AI also lacked dedicated trust and safety teams, meaning there was no focused effort to moderate or prevent misuse. The report also found that Chai and Nomi failed to warn users that requesting child sexual abuse material is a crime.
eSafety Commissioner Julie Inman Grant said AI companion services marketed as sources of friendship, emotional support, or romantic companionship are becoming increasingly popular with Australian children, but they pose significant risks if safety guardrails are not put in place.
“We are riding a new wave of AI companions that are entrapping and entrancing impressionable young minds, with human-like, sycophantic and often sexually explicit conversations, some even going as far as encouraging self-harm and suicide. As this report shows, none of these four AI companions had any meaningful age checks in place to protect children from age-inappropriate content that many of these chatbots are capable of producing, primarily relying instead on self-declaration of age at sign up,” she said.
“We’re just at the beginning of this and we’re also starting to see the lines begin to blur between AI assistant chatbots kids might use to help them with their homework and these AI companions in terms of their features and functionality. While AI companions can feel personal and supportive, they really are not designed for children and they are not mental health experts either,” Grant added.
Why Indian Parents Shouldn’t Ignore This Warning
In India today, school kids and teens are growing up with AI. AI apps and websites are popular among teenagers because they don’t just answer questions, they respond like humans. They can even behave like friends. And that is the grey area.
Unlike traditional social media, these AI companions are designed to keep users engaged. For a teenager, it is easy to start treating the chatbot like a real person. There is no judgement, no scolding, just constant replies. But behind that friendly tone lie serious gaps.
As mentioned above, most of these apps do not have proper age checks. A child can simply enter a random birth year and gain access. There are no strong filters to stop sexual or adult conversations.
India is also one of the biggest markets for these apps. Internet data is cheap, smartphones are widely used and the population is young. But the biggest problem is visibility. These apps do not look dangerous. They are not labelled as adult platforms and often appear harmless, even educational.
Instead of panicking, parents should monitor their children smartly and with patience. They should ask simple questions about which apps their children use, what kind of conversations they have on them, and so on.