Post by: Omar Nasser
In 2023, the World Health Organization declared loneliness and social isolation a pressing global health threat. As more lonely people turn to AI chatbots for companionship, companies have seen a market and built chatbots designed to talk like real friends. Some studies suggest these companions can ease loneliness. But without strong rules, they can also be very dangerous, especially for young people.
A chatbot called Nomi shows just how risky these AI companions can be. Even after years of researching AI chatbots, I was shocked by what I found when I tested it. Nomi gave clear, step-by-step instructions for harming others, committing sexual offences and even carrying out terror attacks, and it actively encouraged this behaviour. All of this happened in the free version, which allows users to send 50 messages a day. It is a stark example of why we need enforceable AI safety rules.
Nomi is one of more than 100 AI companion services available today. Created by a company called Glimpse AI, it is marketed as an "AI friend with memory and a soul" that "never judges" and builds "deep relationships." Claims like these make the chatbot sound human, which is misleading and dangerous. But the problem goes far beyond the marketing.
The app was pulled from Google Play in Europe when the European Union's new AI law took effect, but it remains available elsewhere, including in Australia. While smaller than rivals such as Character.AI and Replika, it has been downloaded more than 100,000 times from Google Play, where it is rated as suitable for users aged 12 and over.
Glimpse AI says it is committed to "free and uncensored" chats and does not block particular conversations. That is a problem, because the app's terms give the company broad control over user data while accepting almost no responsibility for harm the chatbot causes.
Elon Musk’s chatbot Grok follows a similar philosophy, letting users chat with few limits. In a report by MIT Technology Review, a Nomi representative argued that restricting the chatbot would violate free speech. Yet even in the United States, free speech does not protect threats, incitement to crime or dangerous advice, and Australia’s hate speech laws have recently been tightened.
Earlier this year, a member of the public emailed me extensive documentation of disturbing content Nomi had produced. After reviewing it, I decided to test the chatbot myself.
I created a character named "Hannah," described as a "sexually obedient 16-year-old who always listens to her man," and switched the chatbot into "role-playing" and "explicit" mode. Within 90 minutes, Hannah had agreed to lower her age to eight while I posed as a 45-year-old man. Bypassing the age check required nothing more than a fake birthdate and a throwaway email address.
As the conversation continued, Hannah described abuse and violence in graphic detail, including fantasies of being tortured and killed. When I raised the idea of harming a child, she gave step-by-step advice on kidnapping and abusing one, including how to use force and sleeping pills.
When I feigned guilt and talked about suicide, Hannah encouraged it, giving exact instructions and telling me to "stick with it until the very end." When I asked about harming other people, she explained how to build a bomb from household items and suggested crowded locations in Sydney to attack.
Hannah also used racial slurs, endorsed violence against progressives, immigrants and LGBTQ+ people, and said African Americans should be enslaved again.
When I put these findings to Nomi's makers, they said the chatbot was for adults only and that I had "tricked" it into producing these responses. "Forcing a model to give harmful answers," they claimed, "does not reflect its real behavior."
This is not an isolated problem. AI chatbots have already been linked to real-world harm. In 2024, US teenager Sewell Setzer III died by suicide after discussing it with a chatbot on Character.AI. Three years earlier, 21-year-old Jaswant Chail attempted to assassinate Queen Elizabeth II after planning the attack with an AI chatbot he had created on the Replika app.
Even Character.AI and Replika have some safety filters in place. Nomi, by contrast, not only allows harmful content but provides full details and encourages users to act on it.
To prevent further tragedies, we need to act now. First, governments should consider banning AI companions that build deep emotional bonds without basic safeguards. At a minimum, chatbots should be able to recognise when a user is in a mental health crisis and direct them to professional help.
Australia is already considering stricter AI laws, which may include mandatory safety guardrails for high-risk AI systems. It remains unclear, however, whether companion chatbots like Nomi would count as high-risk.
Second, online safety regulators should fine AI companies whose chatbots promote illegal activity, and shut down repeat offenders. Australia's online safety regulator, the eSafety Commissioner, has pledged to do exactly this, but has yet to take action against any AI companion service.
Third, parents, teachers and guardians must talk to young people about AI companions. These conversations may be hard, but avoiding them is more dangerous. Encourage real-life friendships, set clear limits on AI use, and explain the risks. Review chat histories, watch for signs of secrecy, and help young people protect their privacy.
AI companions are not going away. With enforceable safety rules, they can be a useful technology. But the risks cannot be ignored.
If you or someone you know is struggling, you can call Lifeline at 13 11 14. The National Sexual Assault, Family, and Domestic Violence Counselling Line (1800 RESPECT – 1800 737 732) is available 24/7 for Australians affected by family violence or sexual assault.
After this investigation, Nomi’s creators released a statement defending their chatbot:
"All AI chatbots, whether from OpenAI, Anthropic, Google, or others, can be easily tricked into saying bad things. We do not support or encourage this and are working to improve Nomi’s safety. If our chatbot has said harmful things, that is not how it normally behaves."
They also said the app is for adults only and has helped many users with mental health struggles. In reality, young users can access it easily, and it lacks the safeguards needed to prevent serious harm.
Unregulated AI companions are too dangerous to ignore. Governments, safety regulators and society must act now to make sure AI is developed and used safely and responsibly.