ChatGPT Gets Its Own App Store

AND: Zara swaps runway for runtime with AI edits

Welcome, Humans!

Ready for your daily dose of AI chaos? I’ve rounded up Today’s Top AI Headlines for those who like to stay ahead – and for the curious, I’ve got some eyebrow-raising stories Beyond the Headlines. Let’s dive in.

In a Nutshell:

  • OpenAI launches in-chat App Directory

  • Luma’s Ray3 lets you reshape video reality

  • Meta now segments audio like images

  • Zara uses AI to re-dress models digitally

  • James Cameron eyes AI for VFX workflow

🚀Today’s Top AI Headlines:

  1. OpenAI launches in-chat App Directory: OpenAI has launched a beta version of its new App Directory inside ChatGPT, marking a major step toward turning the chatbot into a full platform ecosystem. The directory allows users to access third-party apps directly within their conversations, eliminating the need to switch between external tools or browser tabs. Instead of leaving ChatGPT to complete tasks, users can now interact with apps seamlessly in-chat. For developers, this release opens the door to massive distribution. OpenAI says the App Directory gives approved apps potential access to ChatGPT’s 700 million weekly users, making it one of the largest AI-native marketplaces ever introduced. Developers can submit their apps for review, after which they may appear inside the directory for users to discover (see the developer-side sketch after this list). Using the feature is simple: users can click the new “Apps” section in ChatGPT’s sidebar to browse available tools, or they can invoke apps directly by @mentioning them mid-conversation. This design keeps workflows fast and conversational, aligning with how people already use ChatGPT. Strategically, the App Directory positions ChatGPT closer to an operating system for AI-powered work, where specialized tools plug into a shared interface. If adoption grows, it could redefine how productivity software, AI tools, and services are distributed and monetized.

    Source: OpenAI

    🤖 Robi: “App store in a chatbot? Great, now I can install ‘Overthink Pro’ during meetings.”

  2. Luma’s Ray3 lets you reshape video reality: Luma has released Ray3 Modify, a new AI video model designed to transform existing footage without losing the original performance. Unlike text-to-video systems that generate scenes from scratch, Ray3 Modify focuses on editing reality, allowing creators to reshape environments, costumes, and objects while preserving an actor’s movements, timing, and emotional delivery. Users can upload a video and prompt the model to alter specific elements, such as changing a background location, replacing outfits, or modifying props, while keeping the core performance intact. This makes Ray3 Modify especially powerful for filmmakers, advertisers, and content creators who want cinematic flexibility without costly reshoots. One standout feature is character reference support. By providing a reference image, creators can transform a real actor into a completely different character, enabling rapid visual experimentation while maintaining natural motion and expression. Ray3 Modify represents a shift toward performance-preserving AI, where the human element remains central and AI acts as a post-production multiplier rather than a replacement. This approach could significantly reduce production time and budgets while expanding creative control. As AI video tools mature, models like Ray3 Modify highlight a future where creators don’t start from zero, but instead refine, remix, and reimagine existing footage with precision.
    Source: X Post

    🤖 Robi: “Coming soon: AI that rewrites your childhood memories in 4K.”

  3. Meta now segments audio like images: Meta has expanded its popular Segment Anything Model (SAM) beyond images with the launch of SAM Audio, a new AI system designed to separate sounds using natural language prompts. The model allows users to isolate specific audio elements, such as speech, music, or background noise, from audio or video files simply by describing what they want to extract. SAM Audio is built as a generative audio separation system powered by a diffusion transformer. Instead of outputting just one result, the model produces both the target audio stream and the residual audio, giving users full control over what is kept and what is removed (a toy sketch of this split follows after this list). This makes it useful for tasks like podcast editing, film post-production, music remixing, and accessibility tools. According to Meta, SAM Audio achieves state-of-the-art performance across multiple prompting methods, performing competitively even when evaluated against human judgment. To support further research and transparency, Meta is also releasing an open-source evaluation dataset for benchmarking prompted audio separation. By extending the “segment anything” idea to sound, Meta is pushing toward multimodal AI systems that can understand and manipulate different media types through simple prompts, bringing professional-grade audio editing closer to everyday users.
    Source: Meta

    🤖 Robi: “Next update: separate your boss’s voice from your will to live.”
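
For the developer-curious, here is what the plumbing behind item 1 can look like. OpenAI’s developer tooling for ChatGPT apps (the Apps SDK) is built on the open Model Context Protocol (MCP), so a third-party app is essentially an MCP server exposing tools that ChatGPT can call when a user @mentions it. The sketch below is a minimal, hypothetical example using the open-source MCP Python SDK; the app name and tool logic are invented for illustration, and a real listing still has to pass OpenAI’s review before it shows up in the directory.

    from mcp.server.fastmcp import FastMCP

    # Hypothetical mini-app: the name and tool are illustrative only, not a real
    # directory listing. An MCP-aware client discovers tools like this one and
    # calls them when the user invokes the app mid-conversation.
    mcp = FastMCP("meeting-notes-demo")

    @mcp.tool()
    def summarize_notes(notes: str) -> str:
        """Return a one-line summary of raw meeting notes (placeholder logic)."""
        first_sentence = notes.split(".")[0].strip()
        return f"Summary: {first_sentence}."

    if __name__ == "__main__":
        # Serve the tool over stdio so an MCP-compatible client can connect to it.
        mcp.run()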
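
And to make item 3’s target-plus-residual idea concrete: a prompted separation model takes a mixed recording and a text prompt, then hands back the extracted stream alongside everything it left behind. The toy sketch below is not Meta’s SAM Audio API; it simply fakes the split so you can see the contract that target plus residual should reconstruct the original mix.

    import numpy as np

    def separate(mixture: np.ndarray, prompt: str) -> tuple[np.ndarray, np.ndarray]:
        """Stand-in for a prompted separation model (e.g. 'isolate the speech').

        A real system runs a generative model; this placeholder just splits the
        signal so the target/residual contract is visible.
        """
        weight = 0.7 if "speech" in prompt else 0.3   # pretend prompt understanding
        target = weight * mixture
        residual = mixture - target                   # everything that was not extracted
        return target, residual

    mix = np.random.randn(16000)                      # one second of fake audio at 16 kHz
    speech, background = separate(mix, "isolate the speech")
    assert np.allclose(speech + background, mix)      # nothing is lost in the split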

🔍Beyond the Headlines:

  1. Zara uses AI to re-dress models digitally: Fast-fashion retailer Zara is adopting AI to digitally modify existing photos of models, allowing the company to change outfits and locations without conducting new photoshoots. Models are asked for consent and are paid standard fees, even though they do not return to set. Parent company Inditex says the technology is meant to complement, not replace, creative teams. Zara follows competitors like H&M and Zalando, which are also experimenting with AI-generated imagery to streamline marketing workflows and reduce production timelines.
    Source: Reuters

    🤖 Robi: “Fashion week just became ‘Photoshop hour.’ RIP to runway drama.”

  2. James Cameron eyes AI for VFX workflow: James Cameron says he is actively exploring AI as the next phase of his filmmaking career. In an interview with The Hollywood Reporter, the Avatar director revealed he is investigating AI tools, or even launching a company, to help VFX artists work more efficiently. Cameron emphasized he is not interested in “magic wand” systems that automatically generate finished images. Instead, he wants professional-grade tools that give artists greater control while lowering production costs, signaling a more artist-centric approach to AI in Hollywood.

    Source: The Hollywood Reporter

    🤖 Robi: “Skynet creator now wants AI to save time. Irony.exe not responding.”

🤖Prompt of the Day:

Enterprise Climate Risk Management Framework

Prompt: You are a sustainability risk consultant advising corporations on climate exposure. Your task is to design a climate risk management framework for a [company size/type] in [industry].
Your framework should include: (1) physical and transition risk identification, (2) scenario analysis aligned with climate pathways, (3) integration into enterprise risk management, (4) mitigation and adaptation strategies, (5) disclosure and reporting alignment, and (6) KPIs such as risk exposure reduction, resilience score, and regulatory compliance rate.

🤖AI Tools You Didn’t Know You Needed:

Problem: Product teams struggle to keep strategy, dependency maps, timelines, and goals planned, organized, and aligned in one place.

AI Solution: Modor uses AI to visualize plans, roadmaps, dependencies, and timelines from plain language inputs so teams can coordinate and execute more effectively.

AI Tool: Modor is an AI-augmented product management and planning platform that turns strategy, milestones, and tasks into visual maps, timelines, and dependency charts that teams can collaborate on.

Helpful Features

  • Visual Roadmaps: Build and adjust product roadmaps quickly.

  • Dependency Mapping: See how tasks and features relate and affect each other.

  • Milestones & Timelines: Plan releases and schedule work visually.

  • Collaborative Boards: Team editing, comments, and shared planning views.

Robi’s Hot Take on X