🚀 NotebookLM Goes Public!

ALSO: $30M Safety Lab & DeepSeek Accused of Using Rival AI Data


Hey, AI Visionaries! ❤️

Today’s Insights:

  • NotebookLM Goes Public!

  • AI Pioneer Launches $30M Safety Lab

  • DeepSeek Accused of Using Rival AI Data

  • 5 Hand-Picked AI Tools

  • Prompt: Customer Support Automation

  • 5 Hand-Picked Resources & Guides

Read Time: Under 5 minutes

NEWS

Google’s NotebookLM Now Lets You Share AI Notes!

What's the story:

Google’s NotebookLM now lets anyone view and interact with your shared notes — including AI-made podcasts and summaries — through a public link.

Key Takeaways:

  • You can now share your AI-powered notebooks using a public link.

  • Viewers can listen to AI-generated audio overviews and ask questions.

  • The notes can include content from docs, slides, and even YouTube videos.

  • View-only users can’t edit, but collaborators can if invited by email.

  • Sharing works just like Google Docs — easy and familiar.

Why It’s Important:

This update makes learning and collaboration easier by turning notes into interactive tools. Whether you're a student, teacher, or team leader, sharing AI summaries and audio makes info quicker to understand and more fun to use.

šŸ¤ TOGETHER WITH MINDSTREAM

Get Your Free ChatGPT Productivity Bundle

Mindstream brings you 5 essential resources to master ChatGPT at work. This free bundle includes decision flowcharts, prompt templates, and our 2025 guide to AI productivity.

Our team of AI experts has packaged the most actionable ChatGPT hacks that are actually working for top marketers and founders. Save hours each week with these proven workflows.

It's completely free when you subscribe to our daily AI newsletter.

NEWS

AI Pioneer Launches $30M Safety Lab

What's the story: 

AI legend Yoshua Bengio just launched LawZero, a new lab to build AI that’s safer, more honest, and less likely to mislead users — with $30M in funding to get started.

Key Takeaways:

  • LawZero aims to build AI that doesn’t deceive or harm users.

  • Their first system, ā€œScientist AI,ā€ shares probabilities instead of claiming to be 100% right.

  • Its systems are designed to block harmful AI actions by predicting when a behavior is likely to be risky.

  • It’s backed by $30 million from top tech donors like Eric Schmidt’s group.

  • Bengio argues that modeling AI on human intelligence is risky and that a different approach is needed.

Why It’s Important:

As AI becomes more powerful, it can also become more dangerous. LawZero is trying to fix that by making AI more trustworthy and less likely to make mistakes that could cause harm.

PRODUCTIVITY  

5 Hand-Picked AI Tools to 10x Your Productivity

✅ Kin - Your private, emotionally intelligent AI companion—helping you plan, reflect, and grow with smart memory, voice chat, and secure on-device data.

✅ SwiftlyAds - Lets you create stunning AI-powered 3D product shots, model photoshoots & ad creatives in seconds—no studios, no designers, no hassle.

✅ SlidesPilot - Helps you create, convert, and edit PowerPoint presentations 10x faster with AI—generate slides from any topic, document, or video in seconds.

✅ SwiftCover - An AI-powered tool that instantly generates tailored cover letters for any job—just paste your details and let the AI craft the perfect letter in seconds.

✅ CodingFleet - An AI-powered coding assistant that transforms plain instructions into clean, efficient Python code in seconds—no need to write code from scratch.

Attention Founders & AI Builders!

Built an AI tool? Get it seen on the #1 AI marketplace.

NEWS

Did DeepSeek Train Its AI on Google’s Gemini?

What's the story:

Chinese AI lab DeepSeek may have trained its newest model using output from Google’s Gemini, sparking concerns about unauthorized data use and AI ā€œcopycatting.ā€

Key Takeaways:

  • DeepSeek’s new model, R1-0528, seems to mimic Gemini’s language and reasoning style.

  • Experts suspect the model was trained on Gemini outputs, but there’s no direct proof.

  • DeepSeek has a history of using rival AI outputs, including from ChatGPT.

  • OpenAI and Google are boosting security to stop this kind of model ā€œdistillation.ā€

  • The open web is filled with AI-made content, making clean training data harder to find.

Why It’s Important:

If true, this shows how hard it is to protect AI data — and how easy it might be to copy powerful models. It also highlights a growing arms race in AI security, ethics, and trust.

MORE AI & TECH NEWS

Everything else you need to know today

  • Photo Fail Timeout

    Google hit pause on its ā€œAsk Photosā€ AI tool, saying it’s too slow and glitchy. Meant to answer smart photo-related questions, the tool isn’t living up to its promise yet. A better version is coming soon—faster, sharper, and less likely to fumble your memories.

  • Claude Has a Blog

    Anthropic gave its AI, Claude, a blog called Claude Explains. It writes most of the content, but humans polish it up. Think of it as Claude doing the rough draft, and humans adding the final seasoning.

  • AI? More Like HI

    Builder.ai claimed to use AI to build apps—but it was really 700 humans in India doing the work. The billion-dollar startup collapsed into bankruptcy once the truth came out.

֎ PROMPT OF THE DAY

Customer Support Automation

Prompt: Create a detailed conversational script for a chatbot serving our [industry] business, designed to manage first-level customer queries. Include automated replies for [frequent customer concerns], clear hand-off rules for [advanced scenarios], and dynamic intros using customer data like [order history, loyalty tier].

Source: FutureLens Team
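
Want to try it end to end? Below is a minimal sketch of how the filled-in prompt could be sent to a chat model using the OpenAI Python SDK. The industry, customer concerns, and customer-data fields are hypothetical placeholders, so swap in your own details and whichever model you prefer.

    # A minimal sketch, assuming the OpenAI Python SDK (pip install openai)
    # and an OPENAI_API_KEY set in your environment. The business details
    # below are made-up examples, not part of the original prompt.
    from openai import OpenAI

    client = OpenAI()

    prompt = (
        "Create a detailed conversational script for a chatbot serving our "
        "e-commerce business, designed to manage first-level customer queries. "
        "Include automated replies for shipping delays and refund requests, "
        "clear hand-off rules for billing disputes, and dynamic intros using "
        "customer data like order history and loyalty tier."
    )

    # Ask a chat-capable model to draft the support script.
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; use any chat model you have access to
        messages=[{"role": "user", "content": prompt}],
    )

    print(response.choices[0].message.content)

From there, paste the generated script into your help-desk or chatbot platform of choice.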

📚 TUTORIALS & GUIDES

  1. AI Prompting Guide for Business Leaders - Link

  2. AI for Data Science Projects: Vibe Coding Guide - Link

  3. 10 Steps to Adopting AI in Business - Link

  4. 7 Ways to Boost Your AI Coding Results - Link

  5. Pro Tip: Make ChatGPT Remember You - Link

💬 YOUR FEEDBACK MATTERS

Before you go, we’d appreciate your feedback on today’s newsletter to help us improve and tailor the FutureLens experience just for you!

Did you love it? Was something missing? Let us know!


If you have feedback, we’re listening! Reply to this email and let us know how we can improve.

PS: Missed yesterday's issue? Catch up here.

For real-time AI insights, follow us on LinkedIn

Signing off,

Until next time,

Golam Rabbani - AI Educator, Founder & Investor
