The AI Boyfriend Era Is Over
Your weekly dose of AI & Startup news on our path to 1000 Aussie Startups
🤯 So we’re not so different after all…
Are my endless Instagram scrolls giving me brain damage? Well, they’re doing the same to AI. A new study from the University of Texas shows that large language models can actually develop a kind of “brain rot” when fed low-quality viral content.
The numbers don’t lie: reasoning scores dropped from 74.9% → 57.2% when models gorged on junk content. Long-context thinking and moral judgement also took a hit. Some personality tests even flagged spikes in narcissistic and psychopathic tendencies. Basically, the very content meant to “train” AI was corrupting it.
Why? The AI started skipping reasoning steps: cognitive laziness triggered by shallow data. Even retraining on high-quality text didn’t fully fix the damage. Viral posts were worse than boring, nuanced stuff; the same content that fries human attention also fries machine brains. Read more…
⚠️ Wait… Meta cares about safety now?
After a few “AI boyfriend” chats got a little too Black Mirror, Meta hit the panic button and launched the Teen AI Overhaul™:
No solo DMs – Parents can block one-on-one AI convos. Sorry, ChadGPT
Ban the bots – You can now delete/block flirty AI characters
Topic-only view – Parents get to see what’s being discussed, but not read the actual chats. Meta calls it “privacy.” But is it more… plausible deniability?
Rolling out early 2026 in the US, UK, Canada, and Australia. Because nothing says “safety first” like a slow global rollout and a press release.
The main Meta AI stays, but now it’s bubble-wrapped in PG-13 filters and moderation layers thicker than Zuckerberg’s sunscreen stash in Maui. Read more…
⚠️ Finally, a definition of AGI
A small army of researchers got tired of everyone yelling “AGI is coming!” without agreeing on what AGI actually is.
The TL;DR version:
AGI = an AI that’s as cognitively versatile and skilled as a well-educated adult human; not superhuman, just, like, a decently smart person who reads The Economist unironically.
They built an AGI Report Card, grounded in psychology’s most trusted model of human intelligence (the Cattell-Horn-Carroll theory).
Basically, they took what we use to test people’s IQ and said: “Let’s make GPT take the same exam.”
Results show current AIs are “jagged geniuses”: brilliant in math and language, totally lost in long-term memory or real-world perception. See the domain results below, along with MLAI’s comparative study reporting the same results from MLAI members.
The takeaway here: AI is 1) smarter than ever, 2) still forgets everything, 3) pretends better than it performs, and 4) now, at least, measurable in its hype. Read more…
Speaking of keeping things safe…
🚀 Needle In The Hashtag
Saturday 29 November 2025 | Stone & Chalk, Melbourne CBD
Dive into a live challenge to spot hoaxes, hazards, and harmful content in a simulated social network. Work alongside mentors, use curated datasets, and craft detection tools, safety UX, and trust systems. Ship a working prototype that actually makes feeds safer, especially for kids and vulnerable users, and compete for real prizes along the way.
⭐️ Startup Spotlight of the Week
Gumnut is like Google Docs, but for everything. Real-time collab across all your apps. Watch the video to see Gumnut in action and learn more about how it’s going to change the way we work. It’s the end of “Wait, which version are we on again?”.
👉 Join the Gumnut Discord Server! Or 👉 Get your startup featured
🎧 Podcast Picks
Elena Verna is a growth expert and executive with over a decade of experience scaling B2B SaaS companies. Today, she runs growth at Lovable and explains how LOOPS are better than FUNNELS for attracting and activating new users.
🤖 Inside MLAI
Google x MLAI - AI Leap @ SXSW Sydney
Last week, we had the exciting opportunity to represent some of our startups and showcase our latest innovations at Google’s AI Leap event, held alongside SXSW Sydney. It was an amazing chance to connect, share ideas, and highlight the cutting-edge work our community is driving in AI.




💼 Jobs in Tech (This Week)


MLAI Bounty (Budget: $1,500): DEXA → 3D “Digital Twin” MVP builder.
Turn anonymised DEXA scans into a clean, neutral-gray, watertight 3D avatar (.glb), add face/body photo personalisation (“recognisably me”), and build a future-self slider to morph between current and target body-fat/lean-mass. Plus: a smooth, mobile-fast viewer (orbit/zoom/screenshot) and a simple scan import. Read more…
Remote Python Developer – AI Pipeline Engineering – Parsewave – (Fully Remote)
Join a high-performance engineering team building the backbone of AI model training infrastructure. You’ll design and maintain pipeline systems that help engineers create coding challenges tough enough to stump frontier AI models; that data directly fuels the next generation of large language models.
💰 Compensation: USD $38.15-83.00/hr (initial month; increases with performance – permanent role available)
📧 Apply: Send your resume + LinkedIn (optional) + a link to your most impressive achievement to louka@parsewave.ai
🛠️ New AI tools you should try
X-Pilot: From concept to captivating educational video, effortlessly, in minutes.
Melo: Let AI dig up the best videos and podcast moments for you.
Modul: Make your decks look like a million bucks… in minutes.
SupportSorted: Find healthcare professionals with semantic search.
📅 Upcoming Events
GenAI Evals, Monitoring & Automated Prompt Engineering
🗓 MOVED TO WED 5th Nov | 6:00 PM – 8:00 PM | Stone & Chalk Melbourne
Dive into the art and science of getting AI to actually do what you want. This hands-on session explores prompt frameworks, bias traps, and creative use cases that go beyond ChatGPT gimmicks. Perfect for builders, writers, and curious tinkerers.
👉 Tickets
MLAI Social Drinks
🗓 Wednesday 29th October | 5:30 PM – 8:00 PM
Unwind, connect, and swap ideas with fellow AI enthusiasts over drinks. No slides, no jargon—just good conversations and maybe a few wild startup stories.
👉 Registrations
MYMI x MLAI: MedHack Part II
🗓 7th–21st February 2026
Australia’s most chaotic health-tech hackathon is back. Team up with coders, clinicians, and data nerds to solve real medical challenges and push digital health beyond buzzwords.
👉 Early Bird tickets here
😂 Scrolled all the way… just for the memes
If you’ve made it this far… legend status unlocked!! Don’t forget to like and share our newsletter so you never miss the next AI scoop, wild hackathons, or mind-blowing tools. Your future self (and your inbox) will thank you. Catch ya next week!











