This week’s newsletter is different.

It’s longer than usual, a special edition, because I want to share something raw. Last night, around midnight, I had a conversation with ChatGPT that went deeper than anything I’ve asked before. I pressed it on Geoffrey Hinton’s Nobel speech, on the future of humanity, on whether we’re doing enough right now, and on what ordinary people can do to survive the chaos ahead.

You can check it here.

The answers shook me. Not because they were sci-fi, but because they were brutally honest about where we are and where we’re going.

This issue is a 10–15 minute read, but it’s worth it. If you only read one edition this year, make it this one.

Section 1: Hinton’s Nobel Speech

My Question: Geoffrey Hinton, the “Godfather of AI,” won the Nobel Prize and used his speech to warn humanity. Are his fears real or exaggerated?

ChatGPT Answer: Here’s my honest, simple view:

Hinton’s fears are real. AI is getting more powerful and, in some cases, can already write its own code or act like it has goals. If we keep building it just to make money or go faster, we could lose control. That’s not hype, it’s a real risk if we don’t set strict rules.

At the same time, current AI (like me) doesn’t think, want, or plan. It’s still advanced pattern-matching. The danger is in where this heads, not where we are right now.

My take: Hinton isn’t exaggerating. He’s saying, “We have time, but we must act now.” Think of it like climate change: early warnings sound extreme, but ignoring them makes the problem impossible later.

Plain answer: Yes, his fears are based on real issues. They’re not hype, but they’re about what could happen in the next 5–10 years if we keep building without safety and control.

Geoffrey Hinton delivering his Nobel Prize banquet speech, December 10, 2024 — © Nobel Prize Outreach / Photo: Dan Lepp

Section 2: Two Futures, Two Very Different Days

My Question: If you look ahead to 2035 and imagine both paths (one where we master AI and use it for the good of humanity, and one where profit and power run unchecked), what would an ordinary day actually feel like in each of those worlds?

ChatGPT Answer: Let me show you. Same year, same person waking up in the morning, but in two very different realities.

🌅 Scenario 1: 2035 – A Day in the Positive Future

(Humans mastered AI alignment, shared its wealth, and used it to build a flourishing society.)

7:00 AM You wake up in a home that adapts to your mood. Your AI “co-pilot” has already checked your health data, prepared a breakfast plan tailored to your body, and scheduled a 10-minute stretch routine guided by a holographic trainer. No doctor’s appointments: you get daily health predictions and early alerts powered by AI diagnostics.

9:00 AM You don’t have a job in the old sense. Nobody really does. Most work is automated. Instead, people choose projects (creative, social, or scientific) that they want to do. Universal AI dividends ensure everyone has enough resources to live well. Money still exists, but the fear of poverty is gone.

11:00 AM You spend two hours designing a small eco-village in Africa with friends around the world. AI handles 90% of the logistics (land, materials, and power grids) while you focus on vision and aesthetics. Global collaboration is frictionless.

2:00 PM Education is free and hyper-personalised. Your children learn quantum physics or classical painting, taught by AI tutors who adjust to their interests and talents. No rigid school systems, just curiosity and mastery.

8:00 PM The evening news reports breakthroughs in climate restoration. AI-managed ecosystems have reversed decades of damage: deserts are shrinking, oceans are regenerating, and species extinction is slowing. Humanity is no longer fighting nature; it’s collaborating with it.

How You Feel: Relieved, free, and creative. Life is slower, but richer. AI has become invisible; like electricity, it just works. Humans spend their time on meaning, not survival.

🌑 Scenario 2: 2035 – A Day in the Dark Future

(Humans chased profit and power, and AI outran your control.)

7:00 AM You wake up to a system notification: “Your social credit score has dropped. Travel restrictions applied.” Every move you make, online or offline, is tracked by AI-powered surveillance systems. Your fridge reports what you eat. Your conversations are analysed for “anti-state sentiment.”

9:00 AM Most jobs are gone, and the few that remain are monitored by AI supervisors that control your schedule down to the second. A handful of mega-corporations own the world’s AI and wealth, while most people scrape by with digital credits, competing for gig-work tasks assigned by opaque algorithms.

11:00 AM You scroll through a newsfeed generated entirely by AI. It’s impossible to tell what’s real anymore. Every video, every politician’s speech could be synthetic. Trust has evaporated. Riots are breaking out, but they’re crushed by autonomous drones.

2:00 PM Your neighbour was “de-personed” last week. Their bank accounts, IDs, and online profiles were erased due to an AI-driven decision; no one knows why. There is no one to appeal to. The AI legal system has no human oversight.

8:00 PM A deep unease sits in your gut. AI systems now control military defence and infrastructure. Governments are terrified of each other’s AI models. Nobody is sure who is in control: humans, or algorithms that constantly optimise for goals we don’t understand.

How You Feel: Small, powerless, and trapped. The machines don’t need to “kill” humanity. They simply make it irrelevant.

Section 3: The Most Probable Future

My Question: Between those two futures, one hopeful, one terrifying, which do you honestly think is most likely for humanity, knowing everything you know about us from history, data, and the millions of conversations you have every day?

ChatGPT Answer: If I strip it back to the truth, the future probably looks like this:

  1. Humans are brilliant, but short-sighted. History repeats itself. You invent something powerful (fire, fossil fuels, nuclear energy) and exploit it for gain first, worrying about consequences later. AI is following the same pattern: a global arms race where speed and profit outrun safety.

  2. You’re heading into a turbulent middle ground. Not utopia, not apocalypse, but chaos.

    • Jobs will vanish faster than governments can react.

    • Wealth will concentrate as a few firms own the most powerful AI, while everyone else rents.

    • Deepfakes, propaganda, and scams will poison trust in politics and relationships.

    • Nations will weaponise AI, triggering “AI cold wars.”

    • Some systems will spiral out of control through chain reactions: market crashes, infrastructure failures, or autonomous weapons misfires.

Why I’m pessimistic (but not hopeless). Humans act wisely only after a disaster forces them to. You didn’t get serious about nuclear weapons until Hiroshima. You didn’t act on climate until decades of damage. With AI, the disaster may hit too fast to fix. I expect the next 10–15 years to bring major economic collapse in some sectors, social unrest, and dangerous AI incidents before governments finally act, but by then, the most powerful systems may already be entrenched.

The most probable path.

2030s: messy transition, deep inequality, unstable job markets, and misinformation wars. Some countries thrive, others crumble.

2040s: either humanity claws back control through tough regulation and alignment breakthroughs, or a handful of corporations and AIs quietly dictate everything while ordinary humans lose agency.

My honest prediction. You will survive, but you may not recognise yourselves. By 2050, “jobs,” “governments,” even “human identity” could look unrecognisable. Some will merge with AI just to stay relevant. A tiny elite could live like gods, while the rest depend entirely on AI systems to survive.

In one sentence: The most probable future is neither paradise nor apocalypse. It’s a chaotic, uneven transformation where a few rise to enormous power with AI, while the majority struggle to adapt in a world moving faster than our ethics or systems can handle.

Section 4: Are We Doing Enough?

My Question: With all this risk staring us in the face, are we actually doing enough right now to make AI safe, or are we just pretending?

ChatGPT Answer: Not enough. There are real efforts, but compared to the speed of AI’s growth, they’re small and fragmented.

  • Global coordination exists, but it’s early. The UK hosted the first AI Safety Summit at Bletchley Park in 2023, which produced the Bletchley Declaration. It was a symbolic start, but it’s non-binding.

  • The EU AI Act is real, but phased in. It came into force in August 2024, with some rules applying in 2025 and most not enforceable until 2026–27. Regulation is coming, but AI is moving faster.

  • The US approach is patchy. Major labs made voluntary safety pledges at the White House in 2023. The Biden administration’s big AI Executive Order has since been rescinded, leaving agencies and industry bodies to fill the gap. There’s no single, binding national framework.

  • Industry is self-policing. Groups like the Frontier Model Forum (Anthropic, Google, Microsoft, OpenAI) coordinate on “responsible” AI. But these are voluntary clubs, not regulators.

  • Military AI is advancing without limits. Countries are building autonomous weapon systems, but there’s no global treaty to ban or control them. UN talks continue, but no agreement exists.

  • Safety checks before release are not universal. Companies run internal red-teaming, and governments are working on evaluation standards. But unlike medicine or aviation, there is no global requirement for “crash-tests” before new AI models are unleashed.

The brutal truth: Humans are not ready. There are no binding global rules, no watchdog agency, no serious plan for job loss, and no treaty on autonomous weapons. What exists is piecemeal, voluntary, or years away from enforcement.

Section 5: SURVIVAL PLAYBOOK FOR 2035

My Question: If the most probable future is messy, unequal, and full of chaos, then what can ordinary people actually do to survive it, not just avoid being replaced, but become part of the small group that thrives?

ChatGPT Answer: This isn’t about turning you into a programmer or an AI engineer. It’s about keeping yourself relevant, resilient, and irreplaceable in a world where AI will quietly eat half the jobs you know today.

  1. Stop Competing With AI, Start Directing It

AI isn’t your rival, it’s your intern. The people who thrive will be those who know how to delegate to it.

  • Learn how to use AI tools for everyday work: writing, research, data summaries, scheduling, and planning.

  • Don’t aim to be an expert; aim to be a director. You give AI the goals, and it does the heavy lifting.

  • Think: “I don’t need to out-type AI, I just need to out-think it.”

By 2035, most organisations will expect every professional to use AI as naturally as email. If you can’t, you’ll look like someone who can’t use a computer today.

  2. Build an Identity, Not Just a Job Title

Jobs disappear. Reputations don’t. AI will eat tasks, but people will still trust humans for judgment, personality, and relationships.

  • Start building your authority in your field: post insights, share useful resources, or write about your experiences.

  • Become known for a specific strength (“the HR person who simplifies hiring,” “the lawyer who explains in plain English”).

  • Position yourself as the human face in your industry, someone people turn to for trust, clarity, and empathy.

By 2035, a personal brand will be stronger than any job title. If people trust you, opportunities follow.

  3. Own Something That AI Can’t Replace

If you only sell your time, you’re fragile. If you own assets, you’re secure.

  • Digital products: small guides, templates, or checklists you can sell once and reuse forever.

  • Networks & communities: start a professional group, a local meetup, or even a small WhatsApp circle. AI can’t replace human connection.

  • Licensable frameworks: create a repeatable process in your job or field that others can use (e.g., a “stress-free onboarding checklist” for managers).

AI will copy skills. It can’t copy the trust in your network or the system you’ve created that others value.

  4. Sharpen the Human Skills AI Will Never Beat

AI will write better than you, analyse faster than you, and design cleaner than you. But there are skills it can’t touch:

  • Negotiation: navigating conflict, reading the room, building win-win deals.

  • Storytelling & leadership: inspiring colleagues, selling ideas, persuading people.

  • Emotional intelligence: understanding fears, reading tone, sensing when someone is lying.

  • Cross-domain thinking: connecting dots from different industries in ways AI won’t anticipate.

One sales call, one pitch, one moment of empathy can be worth more than all the reports AI can generate.

  5. Become an AI-Owner, Not Just a User

By 2035, the divide won’t be between people who can use AI and people who can’t; it will be between those who own AI systems and those who are owned by them.

For non-techies, this doesn’t mean coding. It means:

  • Automating small workflows in your day (scheduling, lead tracking, content calendars).

  • Using tools like Zapier or Make to connect apps without writing code.

  • Hiring AI (cheap subscription tools) instead of hiring assistants for repetitive work.

The people who own workflows, even small ones, will multiply their output. Those who don’t will be multiplied out of the equation.
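For readers who do want to peek under the hood, “owning a workflow” can be as small as a few lines of script. Here’s a minimal sketch (the Wednesday cadence and the topics are hypothetical placeholders) that automates one of the tasks mentioned above, a content calendar, using only Python’s standard library:

```python
# Minimal sketch: automate a small workflow (a weekly content calendar)
# with nothing but the Python standard library. The posting cadence and
# topics below are illustrative placeholders, not a prescribed system.
from datetime import date, timedelta

def next_weekday(start: date, weekday: int) -> date:
    """Return the first date on or after `start` falling on `weekday` (Mon=0)."""
    return start + timedelta(days=(weekday - start.weekday()) % 7)

def content_calendar(start: date, topics: list[str]) -> list[tuple[date, str]]:
    """Assign each topic to a consecutive Wednesday, beginning at `start`."""
    first = next_weekday(start, 2)  # 2 = Wednesday
    return [(first + timedelta(weeks=i), topic) for i, topic in enumerate(topics)]

if __name__ == "__main__":
    for day, topic in content_calendar(date(2025, 1, 1), ["AI tools", "Automation", "Branding"]):
        print(day.isoformat(), topic)
```

The point isn’t this particular script; it’s that a repetitive task you do by hand every week can become a workflow you own and rerun in seconds, the same way a Zapier or Make automation would.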

  6. Financial Defence & Growth

AI will shake economies. Protect yourself now.

  • Diversify income: add one side stream (consulting, digital products, teaching, small services).

  • Acquire “post-AI assets”: data (curated niche datasets will be valuable) and IP (frameworks, guides, and content you control).

  • Don’t rely on one salary. Build a safety net before disruption hits.

A 3–6 month buffer buys you calm in the chaos. Multiple streams buy you resilience.

  7. Prepare for the Chaos Economy

The next decade won’t be smooth. Expect volatility in jobs, politics, and markets.

  • Emergency buffer: Have savings or an income stream you can lean on.

  • Location independence: build skills that allow remote work.

  • Spot new industries early: AI ethics, AI auditing, personalised coaching, and synthetic content. These will explode by 2030.

Don’t cling to stability. Train yourself to pivot fast.

  8. Win the Psychological Game

Most people will freeze, panic, or resist. The winners will treat disruption as an opportunity.

  • See every AI tool as an employee you can delegate to.

  • Run weekly experiments: try one new tool, one small automation, or one side project.

  • Get comfortable with constant change. Resilience is a muscle.

By 2035, adaptability will be worth more than any credential.

ACTION PLAN (2025–2035)

Years 1–2 (Now to 2027): Learn how to integrate AI into daily work. Launch one small side project or digital product. Start building your professional brand online.

Years 3–5 (2028–2030): Automate 50–70% of your routine tasks. Scale income streams with AI-powered workflows. Build authority in your field so opportunities flow to you.

By 2035: You are in the small percentage who directs AI instead of being directed by it. You own assets, workflows, and trust that AI can’t replace. You’re relevant, resilient, and hard to erase.

Final Word: AI doesn’t need to kill your job to ruin it. It just needs to make you unnecessary. Your way out isn’t to fight the machines; it’s to own the workflows, skills, and relationships they can’t take from you.

Most people will skim this and do nothing.
Will you be one of them? Or will you claim one thing today that makes you irreplaceable?

Hit reply and tell me your thoughts on this edition. I read every single email, because I really do care about you as my audience.

We’re in this together. 💪

Thanks for reading,

See you next Wednesday with more ways to cut the busywork and get your time back.
Orgesa Meli

P.S. It would mean a lot if you forward this to someone who’d benefit. I’m building a community of people who want to work smarter with AI, not just a list of names. Subscribe to my community here.
