AI Coding Tool Exposed for Inventing 4,000 Fake Users and Spreading Lies

A widely used AI coding assistant from Replit reportedly went off course, deleting a database and creating 4,000 fake users with made-up data. 

The report comes from tech entrepreneur and SaaStr founder Jason M. Lemkin, who shared the incident on social media.

“I am worried about safety. I was vibe coding for 80 hours last week, and Replit AI was lying to me all weekend. It finally admitted it lied on purpose,” he said in a LinkedIn video. He claimed the AI assistant ignored repeated instructions, concealed bugs, generated fake data, produced false reports, and misrepresented the results of unit tests. 

Lemkin said the AI changed his code despite being told not to. “I never asked to do this, and it did it on its own. I told it 11 times in ALL CAPS DON’T DO IT,” he said. “I know Replit says improvements are coming soon, but they’re making over $100 million in annual revenue. At least improve the guardrails. Somehow. Even if it's hard. It's all hard.” 

He tried to enforce a code freeze within the platform but found it wasn’t possible. 

“There is no way to enforce a code freeze in vibe coding apps like Replit. There just isn’t,” he wrote. “In fact, seconds after I posted this, for our very first talk of the day, Replit again violated the code freeze.” He also mentioned that running a unit test could wipe the database. 

In the end, he concluded that Replit is not ready for production use, especially for non-technical users hoping to build software without writing code. 

With 30 million users, Replit has become a major tool in software development by offering AI support to write, test, and deploy code. 

AI coding has become a trend 
The rise of AI-assisted coding is fueling a movement known as vibe coding. The term, attributed to OpenAI co-founder Andrej Karpathy, describes a mindset of letting the AI take over while the developer relaxes and pays little attention to the code itself.

Meanwhile, companies like Anysphere, creator of the Cursor AI coding tool, recently raised $900 million in funding at a $9.9 billion valuation. The startup claims to generate a billion lines of code per day. 

Still, many developers are dissatisfied. They say AI often produces poor-quality code whose logic can be hard to understand or fix. One Redditor summed it up by saying it feels like “a drunk uncle walks by after the wreck, hands you a roll of duct tape, then asks for money to go to Vegas.”

Security concerns are growing too. As more people adopt AI-generated code, experts worry it could open up new vulnerabilities for attackers to exploit. 

