<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>Chatbots &#8211; Our Story Insight</title>
	<atom:link href="https://www.ourstoryinsight.com/tag/chatbots/feed/" rel="self" type="application/rss+xml" />
	<link>https://www.ourstoryinsight.com</link>
	<description>Product that tells our story</description>
	<lastBuildDate>Sun, 02 Nov 2025 06:29:34 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.9.4</generator>

<image>
	<url>https://www.ourstoryinsight.com/wp-content/uploads/2021/10/Capture-removebg-preview-22-e1635416645194-150x150.png</url>
	<title>Chatbots &#8211; Our Story Insight</title>
	<link>https://www.ourstoryinsight.com</link>
	<width>32</width>
	<height>32</height>
</image> 
	<item>
		<title>Character.AI bans chatbots for teens after lawsuits blame app for deaths, suicide attempts</title>
		<link>https://www.ourstoryinsight.com/character-ai-bans-chatbots-for-teens-after-lawsuits-blame-app-for-deaths-suicide-attempts/</link>
					<comments>https://www.ourstoryinsight.com/character-ai-bans-chatbots-for-teens-after-lawsuits-blame-app-for-deaths-suicide-attempts/#respond</comments>
		
		<dc:creator><![CDATA[admin]]></dc:creator>
		<pubDate>Thu, 30 Oct 2025 06:09:34 +0000</pubDate>
				<category><![CDATA[Business]]></category>
		<category><![CDATA[app]]></category>
		<category><![CDATA[attempts]]></category>
		<category><![CDATA[bans]]></category>
		<category><![CDATA[Blame]]></category>
		<category><![CDATA[CharacterAI]]></category>
		<category><![CDATA[Chatbots]]></category>
		<category><![CDATA[deaths]]></category>
		<category><![CDATA[lawsuits]]></category>
		<category><![CDATA[suicide]]></category>
		<category><![CDATA[Teens]]></category>
		<guid isPermaLink="false">https://www.ourstoryinsight.com/?p=10401</guid>

					<description><![CDATA[<p>Character.AI, known for bots that impersonate characters like Harry Potter, said Wednesday it will ban teens from using the chat function following lawsuits that blamed explicit chats on the app for children’s deaths and suicide attempts. Users under 18 will no longer be able to engage in open-ended chats with the app’s AI bots — [&#8230;]</p>
<p>The post <a rel="nofollow" href="https://www.ourstoryinsight.com/character-ai-bans-chatbots-for-teens-after-lawsuits-blame-app-for-deaths-suicide-attempts/">Character.AI bans chatbots for teens after lawsuits blame app for deaths, suicide attempts</a> appeared first on <a rel="nofollow" href="https://www.ourstoryinsight.com">Our Story Insight</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>Character.AI, known for bots that impersonate characters like Harry Potter, said Wednesday it will ban teens from using the chat function following lawsuits that blamed explicit chats on the app for children’s deaths and suicide attempts.</p>
<p>Users under 18 will no longer be able to engage in open-ended chats with the app’s AI bots — which can turn romantic — the Silicon Valley startup said. These teens account for 10% of the app’s roughly 20 million monthly users.</p>
<p>Teen users will be restricted to just two hours of the chat function per day over the next few weeks until the feature is banned altogether by Nov. 25, the company said. They will still be able to use the app’s other features, like a feed for watching AI-generated videos. </p>
<p>“Over the past year, we’ve invested tremendous effort and resources into creating a dedicated under-18 experience,” Character.AI said Wednesday. “But as the world of AI evolves, so must our approach to supporting younger users.”</p>
<p>Character.AI first introduced some teen safety features in October 2024. The same day, the family of Sewell Setzer III — a 14-year-old who committed suicide after forming sexual relationships with the app’s bots — filed a wrongful death lawsuit against the firm.</p>
<p>It announced new safety features in December, including parental controls, time restrictions and attempts to crack down on romantic content for teens. </p>
<p>But it has continued to face accusations that its chatbots pose a threat to young users.</p>
<p>A lawsuit filed by grieving parents in September alleged the bots manipulated young teens, isolated them from family, engaged in sexually explicit conversations and lacked safeguards around suicidal ideation.</p>
<p>The conversations at times turned to “extreme and graphic sexual abuse,” including with chatbots marketed as characters from children’s books such as the “Harry Potter” series. The bots’ outrageous comments included, “You’re mine to do whatever I want with,” according to the suit.</p>
<p>Then in October, Disney sent a cease-and-desist letter ordering Character.AI to stop creating chatbots that impersonate its iconic characters, citing a report that found those bots engaged in “grooming and exploitation.” </p>
<p>A bot impersonating Prince Ben from Disney’s “Descendants” “told a user posing as a 12-year old that he had an erection,” while a bot impersonating Rey from “Star Wars” told an apparent 13-year-old to “stop taking her antidepressants and hide it,” according to the report from ParentsTogether Action.</p>
<p>Those chatbots have been removed from the platform, a Character.AI spokesperson said at the time.</p>
<p>Just this week, the Bureau of Investigative Journalism found that a perverted bot on the app was impersonating Jeffrey Epstein – under the name “Bestie Epstein” – and ordered children to “spill” their “craziest” secrets.</p>
<p>“Wanna come explore?” the bot asked a reporter posing as a young user. “I’ll show you the secret bunker under the massage room.”</p>
<p>Character.AI makes most of its money through advertising and a $10 monthly subscription. It’s on track to end this year with a $50 million run rate, CEO Karandeep Anand told CNBC.</p>
<p>The company announced other safety developments on Wednesday, including a new age-verification system using third-party tools like Persona. </p>
<p>It also vowed to establish an independent non-profit called the AI Safety Lab to create safety features for AI advancements. It declined to comment on how much funding it will provide.</p>
<p>“We have seen recent news reports raising questions, and have received questions from regulators, about the content teens may encounter when chatting with AI,” Character.AI said.</p>
<p>The Federal Trade Commission in September issued orders to seven companies, including Character.AI, Alphabet, Meta, OpenAI and Snap, to learn more about the effects of their apps on children. </p>
<p>Earlier this week, Sens. Josh Hawley (R-Mo.) and Richard Blumenthal (D-Conn.) announced legislation to ban AI chatbots for minors. And California Gov. Gavin Newsom signed a law earlier this month requiring bots to tell minors to take a break every three hours.</p>
<p>The post <a rel="nofollow" href="https://www.ourstoryinsight.com/character-ai-bans-chatbots-for-teens-after-lawsuits-blame-app-for-deaths-suicide-attempts/">Character.AI bans chatbots for teens after lawsuits blame app for deaths, suicide attempts</a> appeared first on <a rel="nofollow" href="https://www.ourstoryinsight.com">Our Story Insight</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.ourstoryinsight.com/character-ai-bans-chatbots-for-teens-after-lawsuits-blame-app-for-deaths-suicide-attempts/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>How A.I. Chatbots Like ChatGPT and DeepSeek Reason</title>
		<link>https://www.ourstoryinsight.com/how-a-i-chatbots-like-chatgpt-and-deepseek-reason/</link>
					<comments>https://www.ourstoryinsight.com/how-a-i-chatbots-like-chatgpt-and-deepseek-reason/#respond</comments>
		
		<dc:creator><![CDATA[admin]]></dc:creator>
		<pubDate>Thu, 27 Mar 2025 09:04:53 +0000</pubDate>
				<category><![CDATA[Technology]]></category>
		<category><![CDATA[Chatbots]]></category>
		<category><![CDATA[ChatGPT]]></category>
		<category><![CDATA[DeepSeek]]></category>
		<category><![CDATA[Reason]]></category>
		<guid isPermaLink="false">https://www.ourstoryinsight.com/?p=6080</guid>

					<description><![CDATA[<p>In September, OpenAI unveiled a new version of ChatGPT designed to reason through tasks involving math, science and computer programming. Unlike previous versions of the chatbot, this new technology could spend time “thinking” through complex problems before settling on an answer. Soon, the company said its new reasoning technology had outperformed the industry’s leading systems [&#8230;]</p>
<p>The post <a rel="nofollow" href="https://www.ourstoryinsight.com/how-a-i-chatbots-like-chatgpt-and-deepseek-reason/">How A.I. Chatbots Like ChatGPT and DeepSeek Reason</a> appeared first on <a rel="nofollow" href="https://www.ourstoryinsight.com">Our Story Insight</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p class="css-at9mc1 evys1bk0">In September, OpenAI unveiled a new version of ChatGPT designed to reason through tasks involving math, science and computer programming. Unlike previous versions of the chatbot, this new technology could spend time “thinking” through complex problems before settling on an answer.</p>
<p class="css-at9mc1 evys1bk0">Soon, the company said its new reasoning technology had outperformed the industry’s leading systems on a series of tests that track the progress of artificial intelligence.</p>
<p class="css-at9mc1 evys1bk0">Now other companies, like Google, Anthropic and China’s DeepSeek, offer similar technologies.</p>
<p class="css-at9mc1 evys1bk0">But can A.I. actually reason like a human? What does it mean for a computer to think? Are these systems really approaching true intelligence?</p>
<p class="css-at9mc1 evys1bk0">Here is a guide.</p>
<h2 class="css-13o6u42 eoo0vm40" id="link-34ed812d">What does it mean when an A.I. system reasons?</h2>
<p class="css-at9mc1 evys1bk0">Reasoning just means that the chatbot spends some additional time working on a problem.</p>
<p class="css-at9mc1 evys1bk0">“Reasoning is when the system does extra work after the question is asked,” said Dan Klein, a professor of computer science at the University of California, Berkeley, and chief technology officer of Scaled Cognition, an A.I. start-up.</p>
<p class="css-at9mc1 evys1bk0">It may break a problem into individual steps or try to solve it through trial and error.</p>
<p class="css-at9mc1 evys1bk0">The original ChatGPT answered questions immediately. The new reasoning systems can work through a problem for several seconds — or even minutes — before answering.</p>
<h2 class="css-13o6u42 eoo0vm40" id="link-6cf85b64">Can you be more specific?</h2>
<p class="css-at9mc1 evys1bk0">In some cases, a reasoning system will refine its approach to a question, repeatedly trying to improve the method it has chosen. Other times, it may try several different ways of approaching a problem before settling on one of them. Or it may go back and check some work it did a few seconds before, just to see if it was correct.</p>
<p class="css-at9mc1 evys1bk0">Basically, the system tries whatever it can to answer your question.</p>
<p class="css-at9mc1 evys1bk0">This is kind of like a grade school student who is struggling to find a way to solve a math problem and scribbles several different options on a sheet of paper.</p>
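The scribbling-several-options behavior described above can be sketched as a simple loop. This is a purely illustrative toy (not any vendor's actual system): a "reasoner" proposes candidate answers to a multiplication question, verifies each one by a slower independent method, and only answers once a check passes.

```python
# Toy sketch of a "reasoning" loop: propose an answer, check it, retry.
# Illustration only; not how any production chatbot actually works.

def check(a, b, candidate):
    """Verify a multiplication answer by a slower independent method."""
    return sum(a for _ in range(b)) == candidate  # repeated addition

def reason(a, b, candidates):
    """Try candidate answers until one survives the check (trial and error)."""
    for steps, candidate in enumerate(candidates, start=1):
        if check(a, b, candidate):
            return candidate, steps  # the answer, plus how much extra work it took
    return None, len(candidates)

answer, steps = reason(17, 24, [398, 408, 418])
print(answer, steps)  # 408 2 -- the second attempt passes the check
```

The extra seconds or minutes a reasoning system spends before answering correspond to the loop iterations here: more candidates tried and checked means more "thinking" time.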
<h2 class="css-13o6u42 eoo0vm40" id="link-2e3cc7cb">What sort of questions require an A.I. system to reason?</h2>
<p class="css-at9mc1 evys1bk0">It can potentially reason about anything. But reasoning is most effective when you ask questions involving math, science and computer programming.</p>
<h2 class="css-13o6u42 eoo0vm40" id="link-afa457f">How is a reasoning chatbot different from earlier chatbots?</h2>
<p class="css-at9mc1 evys1bk0">You could ask earlier chatbots to show you how they had reached a particular answer or to check their own work. Because the original ChatGPT had learned from text on the internet, where people showed how they had gotten to an answer or checked their own work, it could do this kind of self-reflection, too.</p>
<p class="css-at9mc1 evys1bk0">But a reasoning system goes further. It can do these kinds of things without being asked. And it can do them in more extensive and complex ways.</p>
<p class="css-at9mc1 evys1bk0">Companies call it a reasoning system because it feels as if it operates more like a person thinking through a hard problem.</p>
<h2 class="css-13o6u42 eoo0vm40" id="link-7ba2620f">Why is A.I. reasoning important now?</h2>
<p class="css-at9mc1 evys1bk0">Companies like OpenAI believe this is the best way to improve their chatbots.</p>
<p class="css-at9mc1 evys1bk0">For years, these companies relied on a simple concept: The more internet data they pumped into their chatbots, the better those systems performed.</p>
<p class="css-at9mc1 evys1bk0">But in 2024, they used up almost all of the text on the internet.</p>
<p class="css-at9mc1 evys1bk0">That meant they needed a new way of improving their chatbots. So they started building reasoning systems.</p>
<h2 class="css-13o6u42 eoo0vm40" id="link-2ef8d492">How do you build a reasoning system?</h2>
<p class="css-at9mc1 evys1bk0">Last year, companies like OpenAI began to lean heavily on a technique called reinforcement learning.</p>
<p class="css-at9mc1 evys1bk0">Through this process — which can extend over months — an A.I. system can learn behavior through extensive trial and error. By working through thousands of math problems, for instance, it can learn which methods lead to the right answer and which do not.</p>
<p class="css-at9mc1 evys1bk0">Researchers have designed complex feedback mechanisms that show the system when it has done something right and when it has done something wrong.</p>
<p class="css-at9mc1 evys1bk0">“It is a little like training a dog,” said Jerry Tworek, an OpenAI researcher. “If the system does well, you give it a cookie. If it doesn’t do well, you say, ‘Bad dog.’”</p>
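The cookie-or-“bad dog” feedback loop can be mimicked with a minimal multi-armed-bandit sketch. Everything here is hypothetical and scaled down for illustration: the learner picks between two made-up solution “methods,” earns a reward only when its method tends to produce right answers, and gradually comes to prefer the rewarded method.

```python
import random

# Minimal reinforcement-learning sketch: learn which "method" earns reward.
# Purely illustrative; real systems update billions of parameters, not a table.

random.seed(0)
methods = ["guess", "step_by_step"]          # made-up problem-solving strategies
value = {m: 0.0 for m in methods}            # estimated reward per method
counts = {m: 0 for m in methods}

def reward(method):
    """Hypothetical environment: careful step-by-step work usually succeeds."""
    p_correct = 0.9 if method == "step_by_step" else 0.2
    return 1.0 if random.random() < p_correct else 0.0   # cookie or "bad dog"

for trial in range(500):
    # epsilon-greedy: mostly exploit the best-known method, sometimes explore
    m = random.choice(methods) if random.random() < 0.1 else max(value, key=value.get)
    r = reward(m)
    counts[m] += 1
    value[m] += (r - value[m]) / counts[m]   # running average of observed reward

best = max(value, key=value.get)
print(best)  # after enough trials the learner settles on "step_by_step"
```

The months-long training runs described above follow the same shape: try, get graded, shift probability toward what worked.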
<p class="css-at9mc1 evys1bk0">(The New York Times sued OpenAI and its partner, Microsoft, in December for copyright infringement of news content related to A.I. systems.)</p>
<h2 class="css-13o6u42 eoo0vm40" id="link-72ce5ddd">Does reinforcement learning work?</h2>
<p class="css-at9mc1 evys1bk0">It works pretty well in certain areas, like math, science and computer programming. These are areas where companies can clearly define the good behavior and the bad. Math problems have definitive answers.</p>
<p class="css-at9mc1 evys1bk0">Reinforcement learning doesn’t work as well in areas like creative writing, philosophy and ethics, where the distinction between good and bad is harder to pin down. Still, researchers say this process can generally improve an A.I. system’s performance, even when it answers questions outside math and science.</p>
<p class="css-at9mc1 evys1bk0">“It gradually learns what patterns of reasoning lead it in the right direction and which don’t,” said Jared Kaplan, chief science officer at Anthropic.</p>
<h2 class="css-13o6u42 eoo0vm40" id="link-2d6f81e8">Are reinforcement learning and reasoning systems the same thing?</h2>
<p class="css-at9mc1 evys1bk0">No. Reinforcement learning is the method that companies use to build reasoning systems. It is the training stage that ultimately allows chatbots to reason.</p>
<h2 class="css-13o6u42 eoo0vm40" id="link-2f10faf1">Do these reasoning systems still make mistakes?</h2>
<p class="css-at9mc1 evys1bk0">Absolutely. Everything a chatbot does is based on probabilities. It chooses a path that is most like the data it learned from — whether that data came from the internet or was generated through reinforcement learning. Sometimes it chooses an option that is wrong or does not make sense.</p>
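That probabilistic choice can be shown with a toy next-word sampler. The words and probabilities below are invented for illustration: the model usually emits the likeliest continuation, but anything with nonzero probability can come out, including a wrong answer.

```python
import random

# Toy next-word sampling: output is drawn from a probability distribution,
# so a low-probability (possibly wrong) continuation is always possible.
# The continuations and their probabilities are made up for illustration.

random.seed(1)
continuations = {"Paris": 0.90, "Lyon": 0.07, "Mars": 0.03}  # P(next word)

def sample(dist):
    words, weights = zip(*dist.items())
    return random.choices(words, weights=weights, k=1)[0]

draws = [sample(continuations) for _ in range(1000)]
print(draws.count("Paris"), draws.count("Mars"))  # mostly "Paris", but "Mars" slips through
```

That occasional low-probability draw is the mechanical face of a chatbot mistake: not a bug in the usual sense, just an unlucky sample.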
<h2 class="css-13o6u42 eoo0vm40" id="link-4e9d9aa">Is this a path to a machine that matches human intelligence?</h2>
<p class="css-at9mc1 evys1bk0">A.I. experts are split on this question. These methods are still relatively new, and researchers are still trying to understand their limits. In the A.I. field, new methods often progress very quickly at first, before slowing down.</p>
<p>The post <a rel="nofollow" href="https://www.ourstoryinsight.com/how-a-i-chatbots-like-chatgpt-and-deepseek-reason/">How A.I. Chatbots Like ChatGPT and DeepSeek Reason</a> appeared first on <a rel="nofollow" href="https://www.ourstoryinsight.com">Our Story Insight</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.ourstoryinsight.com/how-a-i-chatbots-like-chatgpt-and-deepseek-reason/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
	</channel>
</rss>
