AI Jobs, News, Events and Memes
Your weekly dose of AI & startup news on our path to 1000 Aussie startups.
MLAI Special Edition: AI in 2025 wrapped!
2025, the year AI stopped being a magic trick and started acting like plumbing.
It’s baked into how we work, how money moves, and how governments glare at each other across oceans. The novelty has officially worn off, and as we finally have a week or two to take a breath, let’s reflect on the year that was!
📚 This Week’s Line-Up
🗞️ What the hell happened in 2025
😎 What is coming in 2026
📄 2025 must know research papers
🎆 What did MLAI do in 2025?
🔥 2026 events to up-skill yourself
🗞️ What the hell happened in 2025? 😎
If 2023 was the Year of Magic, and 2024 was the Year of Hype, 2025 was the Year of Consequences. “F#&k around” is done and we’re comfortably now in ‘Find Out’.
Let’s reflect shall we?
Q1: The Wake-Up Call ⏰
📉 The Black Swan (Jan): DeepSeek R1. DeepSeek (a Chinese startup) released R1, a reasoning model that matched OpenAI’s top reasoning models for pennies on the dollar. The shock wasn’t technical; it was economic. [DeepSeek Story]
🏗️ Project Stargate Escalates In response, the US announced Project Stargate, a $500B plan to build multi-gigawatt data centers. [Stargate Project]
✨ The Culture Shift (Feb): Vibe Coding. Andrej Karpathy coined “vibe coding,” capturing the shift where developers can finally say “Jesus, take the wheel!” and let the model drive the coding. [Read Karpathy’s Take]
🔌 The New Standard (Mar): MCP Wins OpenAI adopted Anthropic’s Model Context Protocol (MCP), finally ending the connector chaos and letting models talk to tools properly. [View MCP Announcement]
Q2: The Fracture 🌍
🏰 The Split (May): Sovereign Clouds. The UAE launched a gigawatt-scale sovereign AI cloud with Microsoft, i.e. “We’ll take your models, but the data stays here.” [See G42/OpenAI Deal]
🌐 The Open-Weights Flip (May–Jun) China’s open-weights ecosystem quietly overtook Western counterparts in developer usage, shifting global leverage away from closed models. [See Report]
Q3: The Reality Check 🚧
🚀 The Shift (Aug): GPT-5 & Agents GPT-5 launched, not just as a chatbot, but as an agentic system capable of executing workflows. It doesn't just chat; it does. [ChatGPT 5]
🛡️ The Uncomfortable Truth (Aug–Sep): Reporting confirmed AI-assisted cyber offense is accelerating faster than defense. Anthropic: “Hey everyone! Look how good our model is at hacking!… Wait… CRISIS. LOOK HOW GOOD OUR MODEL IS AT HACKING.” [Read Threat Report]
🤝 The Copyright Check (Sep): Anthropic Settles Anthropic broke ranks by settling its major copyright lawsuits rather than fighting to the death. By writing a massive check to publishers, they signaled that the era of free training data is officially over. [Read NYT Report]
Q4: The Industrial Scale 🏭
⚡ The Real Bottleneck: Power. Nvidia crossed a $5 trillion valuation, but the real commodity became electricity. Tech giants started signing decades-long nuclear contracts. [Amazon Report] [Nvidia $5 Trillion Report]
🤖 The Wall: Robotics Stalls. Tesla missed its ambitious Optimus deployment targets. Turns out, physical intelligence is infinitely harder than digital intelligence. Robots are coming, just... very slowly. [See Tesla Optimus Delay Report]
🦘 OpenAI Says G'day OpenAI officially opened in Sydney. With 25 million Aussies using ChatGPT (mostly for strata complaints), they finally sent a team down under to manage the growth. [OpenAI Opening News]
🪦 RIP LaunchVic Bad news for Victorian founders. The state government announced the dissolution of LaunchVic as a standalone entity to cut costs, shifting grants to Invest Victoria. [Read LaunchVic Shut Down]
🏛️ The Hard Pivot: US National Policy The US government finally overrode state-level laws with a unified national framework. The goal? Accelerate deployment and stop the regulatory fragmentation. [Executive Order Report]
💰 The Finale: Capital Chooses Sides SoftBank finalized a $40B investment in OpenAI. Japanese capital meets American infrastructure to close out the year. [See SoftBank Investment]
🧠 Some big themes: The Models Learned to Chill
AI learned the power of the pause. Instead of blurting out answers with the unearned confidence of a first-year management consultant, the new models actually stop to think. They plan, they check their work, and miracle of miracles, they occasionally admit they might be wrong.
This reasoning capability turned passive chatbots into active “Agents.” They aren’t just generating text anymore; they’re running workflows. Basically, we went from “Write me an email” to “Fix this mess,” and the results have kicked off a massive research arms race.
💳 The “Don’t Tell IT” Economy
The money finally arrived. Big AI players are pulling in $20 billion a year (or so they say…), and half of U.S. businesses are paying for the privilege. But the funniest stat of the year isn’t the corporate contracts; it’s the Shadow AI.
Recent surveys show employees are adopting AI way faster than their bosses. In fact, most practitioners are paying for tools themselves. When people are secretly putting enterprise software on their personal Visa cards just to survive the workday, you know a technology has officially crossed over.
⚡ Physics Bites Back
All this digital brilliance slammed headfirst into a very physical wall: Electricity.
It turns out, thinking requires a lot of juice. We aren’t measuring data centres in racks anymore; we’re measuring them in gigawatts. These aren’t server rooms; they are nation-state industrial projects. This has birthed the era of “AI Sovereignty,” which is just a fancy way of saying every country wants its own brain, its own power plant, and its own off-switch. The cost of running these data centres is also a major concern fuelling worries about an AI bubble.
🧪 Science & Safety: Less Terminator, More Lab Coat
The vibe around safety has shifted, too. The “Killer Robot Apocalypse” arguments have quieted down, maybe because we have more visibility over the limits and capabilities of models, and how they can be really good at pretending to be helpful while doing nothing (I guess we’re not so different after all?).
Meanwhile, AI quietly crept out of the chat box and into the lab. It’s designing proteins, generating hypotheses, and teaching robots to walk without falling over (mostly). It’s slower than the hype promised, but honestly? It still kicks ass.
🚀 The Bottom Line
In 2025, AI became a production system. The age of asking “Will this matter?” is dead. The only questions left are: Who gets to run it? Who pays for the electric bill? And who cleans up the mess when it breaks?
(And like most things that become essential overnight, we all really wish we’d thought about that last part a bit earlier.)
😎 What is coming? 2026 Trends
If 2025 was the year of Consequences, 2026 is the year of Coordination.
We crunched the massive year-ahead reports from Google Cloud and IBM, and the verdict is unanimous: The era of the “Single Chatbot” is dead. The new game is Orchestration.
Here is what that actually looks like when it hits your office.
1. The Death of the Micromanager 💀
We are finally moving from Instruction-Based computing (telling a machine how to do a task step-by-step) to Intent-Based computing (telling it the outcome you want and letting it figure out the rest).
Old Way: “Write a SQL query, then format it as a CSV, then email it to Bob.”
2026 Way: “Make sure Bob has the sales data every Monday.”
The Tech: Google calls this the “Digital Assembly Line”. It runs on the Agent2Agent (A2A) protocol, basically Slack for bots, allowing a “Coder Agent” to pass a task to a “Reviewer Agent” without you ever opening a tab.
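The “Digital Assembly Line” idea is easier to see in code. Below is a minimal sketch of the Coder-Agent-to-Reviewer-Agent handoff described above; the `Task`, `CoderAgent`, and `ReviewerAgent` names are hypothetical illustrations of the pattern, not the actual A2A protocol API.

```python
# Sketch of intent-based orchestration: you state the outcome, a pipeline
# of agents decides the steps. All class names here are illustrative.
from dataclasses import dataclass, field

@dataclass
class Task:
    intent: str                       # the outcome you want, not the steps
    artifacts: list = field(default_factory=list)
    history: list = field(default_factory=list)

class CoderAgent:
    def handle(self, task: Task) -> Task:
        # In a real system this would call an LLM to draft a solution.
        task.artifacts.append(f"code for: {task.intent}")
        task.history.append("coder: drafted solution")
        return task

class ReviewerAgent:
    def handle(self, task: Task) -> Task:
        # A second model critiques the draft before anything ships.
        ok = all("code for" in a for a in task.artifacts)
        task.history.append("reviewer: approved" if ok else "reviewer: rejected")
        return task

def orchestrate(intent: str) -> Task:
    """Route one intent through the agent pipeline, assembly-line style."""
    task = Task(intent=intent)
    for agent in (CoderAgent(), ReviewerAgent()):
        task = agent.handle(task)
    return task

result = orchestrate("make sure Bob has the sales data every Monday")
print(result.history)  # ['coder: drafted solution', 'reviewer: approved']
```

The point of the pattern: the human supplies one line of intent, and the task object carries its own audit trail as it passes between agents, so you never open a tab.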
2. From “Go Away” to “Get It Done” 🛎️
For the last decade, customer service automation was designed for Deflection (keeping humans away from you). In 2026, it flips to Delegation.
The Shift: Agents now have permission to do things: buy the tickets, process the refund, schedule the repair, all using standards like the Agent Payments Protocol (AP2).
The Trap: IBM’s data shows this is fragile territory. Two-thirds of customers say they will switch brands immediately if they find out AI is acting on their behalf without explicit transparency.
3. Security: The Hunter-Gatherer 🛡️
Security Operations Centers (SOCs) are drowning in alerts. In 2026, the human role shifts from Responder (chasing blinking lights) to Hunter (strategic tracking).
AI agents triage the noise and patch the leaks at machine speed.
Humans decide what matters.
As IBM puts it: AI doesn’t remove responsibility; it concentrates it.
4. The New Annoying LinkedInfluencer Job: The AI Orchestrator 👩💻
Forget prompt engineering. The talent bottleneck is no longer people who can talk to models; it’s people who can manage fleets of them.
The Gap: IBM found that employees are actually ahead of their bosses, actively seeking AI tools while leadership worries about “burnout.” In fact, 42% of employees would take a pay cut just to get better AI training.
The Fix: Google prescribes “Field Days” and internal hackathons. If you aren’t gamifying this transition, you’re losing your best people to companies that will.
5. The Boring Truth: Boring Wins 📉
Here is the part the reports whisper but don’t scream: Orchestration is really, really hard. Intent-based computing only works if you actually know your intent. If your internal processes are messy, undefined, or bureaucratic, AI agents won’t fix them, they will just automate the chaos at 100x speed.
The Prediction: The companies that win in 2026 won’t look the most innovative. They will look the most boring, organised, and process-driven. And they will crush everyone else.
📄 2025 Must-Know Research Papers
The Vibe Shift: If 2024 was about making models talk, 2025 was about making them do. We moved from stochastic parrots to intelligent systems that reason, plan, and occasionally realize they messed up.
Here are the real papers that defined the year.
🧠 The “Thinking” Breakthroughs
DeepSeek-R1: Incentivizing Reasoning Capability in LLMs via Reinforcement Learning. The “Oh Wow” Moment: This paper proved that AI can learn to reason just by being rewarded for the final answer, no human step-by-step hand-holding required. This effectively killed the “we need more human data” bottleneck. Read Paper
Absolute Zero: Reinforced Self-Play Reasoning The Sequel: DeepSeek showed RL works; this paper proved you can do it with zero human data. By playing games against itself and verifying code execution, the model self-evolved. It’s the “AlphaZero” moment for coding. Read Paper
SFT Memorizes, RL Generalizes The Why: Everyone knows that RL is working, but this paper explained why. It proved that standard training (SFT) is mostly just efficient memorization, whereas Reinforcement Learning (RL) actually teaches the model to handle new, unseen rules. Read Paper
FrontierMath: A Benchmark for Evaluating Advanced Mathematical Reasoning in AI. The Reality Check: Showed that while AI crushes exam questions (Gold medals!), it solves less than 2% of truly novel research problems. This paper forced the industry to stop measuring “intelligence” by memorization. Read Paper
Kimi k1.5: Scaling Reinforcement Learning with LLMs. The Rival: This paper showed you don’t need complex search trees (like MCTS) to get genius-level math scores. Simple, scalable RL works just as well, matching o1 performance on major benchmarks. Read Paper
Towards System 2 Reasoning in LLMs. The Upgrade: Teaches AI to plan its reasoning before it even starts solving. It’s the difference between guessing and strategizing (”thinking about thinking”). Read Paper
🤖 Agents That Actually Work
Emergent Coordination in Multi-Agent Language Models. The Org Chart: When you put multiple agents together, they naturally split roles like “planner,” “coder,” and “checker” without being told. Turns out, AI managers are redundant too. Read Paper
Intern-S1-MO: Long-Horizon Reasoning Agent. The Fix: Agents that stop, think about their plan, and fix it mid-task. This paper introduced “lemma-based memory,” allowing agents to tackle Olympiad-level math by breaking it into chunks rather than guessing. Read Paper
🌍 The Open Source GOATs
Qwen3 Technical Report. The All-in-One: The first open model to seamlessly integrate “Thinking Mode” (deep reasoning) and “Chat Mode” (speed) into a single system. No more switching models. Read Paper
Mutarjim: The Small Model Victory. The Efficiency Win: A tiny 1.5B model that beats GPT-4o on Arabic-English translation. Proof that specialized, small models can crush generalist giants. Read Paper
🧬 Memory & World Building
VideoREPA: Learning Physics for Video Generation through Relational Alignment with Foundation Models. The Matrix: Video models started learning physics, gravity, collisions, object permanence just by watching enough data. They aren’t just making videos; they are simulating reality. Read Paper
DINOv3: Vision Foundation Model The Vision King: The new gold standard for computer vision. A massive release allowing AI to "see" and understand images without human labels. This is the engine powering the robotics revolution. Read Paper
Human-Inspired Episodic Memory for Infinite Context. The Upgrade: Finally, a way for models to handle practically infinite context by mimicking human memory (storing events, not just tokens). This means agents that actually remember what you told them last week. Read Paper (ICLR 2025 Accepted Paper)
⚙️ Under the Hood (Efficiency)
FP4 Quantization for LLM Training. The Discount: Proved that training models with tiny 4-bit numbers actually works. Same performance, way cheaper to run. This is why your API bill didn’t triple this year. Read Paper (Note: Matches the “FP4 All the Way” paper)
Jamba: Hybrid Transformer-Mamba Models. The Context King: Showed you can mix architectures (Mamba + Transformer) to get massive context windows without massive compute costs. Attention lost its monopoly. Read Paper
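To see why 4-bit numbers are such a big deal, here’s a toy round-trip: squeeze full-precision weights onto a 16-level grid, then scale back. This uses simple symmetric integer quantization for clarity, not the paper’s actual FP4 format (which uses a floating-point 4-bit layout with per-block scales); the function names are ours.

```python
# Toy illustration of low-bit quantization: 16 levels, one shared scale.
# Not the paper's FP4 format; just the core compress-and-recover idea.

def quantize_4bit(weights):
    """Map floats to 4-bit signed integers in [-8, 7] plus one scale."""
    scale = max(abs(w) for w in weights) / 7
    q = [max(-8, min(7, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate floats from the 4-bit codes."""
    return [v * scale for v in q]

weights = [0.91, -0.33, 0.05, -0.70]
q, scale = quantize_4bit(weights)
approx = dequantize(q, scale)
errors = [abs(a - w) for a, w in zip(approx, weights)]
print(q)                    # 4-bit codes, each in [-8, 7]
print(max(errors) < scale)  # True: error bounded by one quantization step
```

Each weight now needs 4 bits instead of 16 or 32, and the reconstruction error stays below one grid step; FP4 training papers show that, with careful scaling, that error barely dents model quality.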
🛡️ Safety & The Buzzkill
Scalable Oversight via Recursive Self-Critiquing. The Mirror: Turns out, AI is better (and cheaper) than humans at reviewing other AI’s work. Read Paper
A Closer Look at Model Collapse. The Buzzkill: The paper that scared everyone. It confirmed that too much AI training on AI output makes models boring and less creative (”The Hivemind Effect”). This shifted the safety debate from “Is it dangerous?” to “Is it dull?” Read Paper
🎆 What the hell did MLAI do this year?
Before we crack on with 2026, here’s a quick rewind of 2025. It was a big one. Long nights, big brains with lots of wrinkles, questionable decisions, etc.
Here’s what MLAI did:
Q1: The MedHack Era 🏥
The Great Debate: Doctors vs Lawyers. A respectful bunfight where medics and lawyers argued their case, the crowd stirred the pot, and everyone left with stronger opinions and weaker voices. Doctors won (barely).
MedHack: Our big kahuna. Teams built AI to save simulated patients in a deeply under-resourced fake hospital. Chaos, learning, late nights, and a suspicious amount of pizza.
How to Start an AI Startup Founders who’ve actually been through the wringer shared what works, what doesn’t, and what you definitely shouldn’t do.
Q2: Building & Funding 💸
MLAI Members & Friends Networking No slides. No agenda. Just a room full of wrinkle-brains talking shop and accidentally starting startups.
How to Raise Your First Million. Fundraising, taught by founders who have done it (or chose not to). Real talk about term sheets, timing, and why “just raise” is terrible advice.
AI Start-up Pitch Competition Big ideas in three minutes flat. Some nailed it. Some learned a lot. Charli had her debut as the mean judge.
Hugging Face – LeRobot Hackathon Robots, reinforcement learning, and Melbourne briefly becoming a robotics lab. Honestly, one of our favs and we’ll DEFINITELY do more robotics stuff in 2026.
Vibe Coding for Business Prompt-driven coding in the wild. Some magic. Some mess. A lot of laughs and very real lessons.
Q3: Reality Checks 🌏
Cybersecurity & AI AI attacks, AI defence, and a reminder that the internet is a spicy place.
Unicorn Graveyard Founders with real traction talking honestly about shutting down. No hype. No LinkedIn gloss. Just the truth.
NextGen AI 2025 Hackathon. A full week of building proper AI demos with real companies and real stakes. Fewer pitch decks, more shipping.
AI: How Far Ahead Is China? Fresh off the plane from Shanghai with field notes, spicy takes, and a reality check for Australia.
Q4: The Finish Line 🏁
MLAI Social Drinks Night Just drinks, laughs, and the kind of conversations that usually happen after the official bit ends.
GenAI Evals & Monitoring For builders who want their GenAI systems to work on Monday, not just demo well on Friday.
The eSafety Hackathon, Needle in the Hashtag: A week of building tools to make the internet less cooked. Big hearts, big ideas, zero nonsense.
Use AI to Hack Your Way to Google Page #1 A hands-on workshop where founders actually learned how to get traffic without sacrificing a goat to the SEO gods.
🔥2026 Events to Up-Skill yourself
In 2026, expect bigger hackathons, sharper talks, more founder-first sessions, and plenty of chances to meet the people building AI in Australia right now. Same community energy. Bigger ambitions. Better snacks. Here’s what’s on in the next few weeks:
1. BuildDay
🗓 Saturday 10th Jan | ⏰ 9:00am – 3:00pm | Stone & Chalk
Curious about building your own app, but not sure where to start? Or started but having issues?
This isn’t just another networking event or talk-fest. It’s a hands-on, practical session designed to get you building from the start - no fluff, just real skills. You’ll be working with tools like v0, Vercel and Supabase - even if you’ve never used them before. Whether you’re completely new to coding or looking to understand how modern apps come together, this is a space to learn by doing.
2. Claude Code Meetup Melbourne
🗓 Thursday 15th Jan | ⏰ 5:30pm – 8:00pm | Stone and Chalk
MLAI’s running a Claude Code community meetup for builders who like shipping fast and swapping workflows. Expect snacks + networking, two quick community demos, then a live Q&A with the Anthropic team. You’ll leave with practical tactics to try the next day; spots are limited.
👉 Grab your seat on Luma
3. Melbourne | AI Builder Co-working x S&C
🗓 Saturday 17th Jan | ⏰ 1:00pm – 5:30pm | Stone & Chalk
This event is great for those who want to work on their AI products, see what others are building, get to know Melbourne’s awesome AI community more, and just hang out and have a great time!
4. Generate, Capture & Nurture Leads on Autopilot - Built in 4 Hours
🗓 Jan 24th | ⏰ 9:30am – 3:30pm | 📍 Stone & Chalk
You’re posting content, getting interest, and attracting the right people… but somewhere between “This looks great!” and “Let’s book a call,” your leads disappear. Manual replies, scattered messages, forgotten follow-ups, and endless spreadsheets aren’t just annoying… they’re killing your growth.
It’s time to automate the entire journey. ⚡
5. Use AI To Hack Your Way To Page #1 Of Google
🗓 Jan 31st | ⏰ 10:00 AM – 2:00 PM | 📍 Stone & Chalk
Want your startup on page one of Google without spending months guessing what to write? We’ll link you with an AI agent that will research, write, publish, and capture spot #1 on Google search while you sleep. This event is EXPENSIVE (sorry) because we’ll actually show you how to integrate and use the agents that we spent months working on. If you’re a struggling founder/student, write me an email (sam@mlai.au) explaining why you need hella discounts.
6. MYMI x MLAI: MedHack Part II
🗓 21st–22nd February 2026
Australia’s most chaotic health-tech hackathon. Team up with Hackers, Hustlers, Hipsters & Healers to solve real medical challenges and push digital health beyond buzzwords.
👉 Early Bird tickets here.