Stable Diffusion 3.5: 2X Faster with 40% Less GPU Memory
Stable Diffusion 3.5 now runs 2X faster with 40% less VRAM on RTX cards — no new GPU needed. Stability AI partners with Warner Music, Universal, and EA.
Stability AI just pushed out the biggest upgrade to Stable Diffusion in months — and buried it beneath a wave of announcements that include Hollywood VFX hires, two major music label deals, and a new end-to-end creative platform called Brand Studio. The technical headline: Stable Diffusion 3.5 now runs 2X faster and uses 40% less video memory on NVIDIA RTX graphics cards, with no new hardware required.
That speed jump matters immediately for anyone running local image generation. But the larger shift is strategic: Stability AI is assembling the people, partnerships, and compliance certifications needed to become the AI engine inside professional creative workflows and AI automation pipelines — not just a model anyone can download.
Stable Diffusion 3.5: 2X Faster, 40% Less GPU Memory — Here's What Changed
The performance boost comes from NVIDIA TensorRT optimization (a software layer that restructures how AI models process data on NVIDIA GPUs, squeezing significantly more performance without changing the underlying model weights). For Stable Diffusion 3.5 users on RTX hardware, this means:
- Image generation takes roughly half the time it did before on RTX GPUs
- Models that previously required 24GB VRAM now fit in substantially less memory
- Enterprise servers handle more concurrent generation requests per GPU — cutting cloud bills
- AMD users also benefit: Stability AI announced optimized support for Radeon GPUs and Ryzen AI APUs (AMD's chips with built-in AI processing cores)
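Where does a 40% memory cut come from without retraining? TensorRT optimizations of this kind typically combine graph restructuring with lower-precision inference (for example, storing weights as 8-bit floats instead of 16-bit). The back-of-envelope sketch below illustrates the weight-memory side of that trade; the parameter count and precisions are illustrative assumptions, not official Stability AI figures.

```python
# Back-of-envelope VRAM estimate for model weights at different precisions.
# Parameter count and precisions below are illustrative assumptions,
# not official Stability AI or NVIDIA numbers.

def weight_memory_gb(num_params: float, bytes_per_param: float) -> float:
    """Memory to hold the weights alone (excludes activations, caches, etc.)."""
    return num_params * bytes_per_param / 1024**3

PARAMS = 8.1e9  # assumed parameter count for a large diffusion model

fp16 = weight_memory_gb(PARAMS, 2)  # 16-bit floats: 2 bytes per parameter
fp8 = weight_memory_gb(PARAMS, 1)   # 8-bit floats: 1 byte per parameter

print(f"FP16 weights: {fp16:.1f} GB")   # ~15.1 GB
print(f"FP8 weights:  {fp8:.1f} GB")    # ~7.5 GB
print(f"Reduction:    {1 - fp8 / fp16:.0%}")  # 50% on weights alone
```

Note that headline figures like "40% less VRAM" cover total usage, including activations and intermediate buffers, which shrink less than the weights do, so the real-world saving is smaller than the raw weight arithmetic suggests.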
Stability AI also launched Stable Diffusion 3.5 NIM — a NIM microservice (a containerized, pre-packaged AI model that enterprise developers can deploy like any software service, with standardized inputs and outputs) — through NVIDIA's enterprise platform. This makes the model accessible to IT teams who need deployment that fits existing infrastructure, not just developers comfortable with raw model weights.
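The "standardized inputs and outputs" point is the key selling feature for IT teams: instead of wiring up model weights, they POST JSON to a container. The sketch below shows what assembling such a request might look like; the endpoint path, port, and field names are assumptions for illustration, so check the microservice's own API reference for the real schema.

```python
# Sketch of preparing an HTTP request to a containerized inference
# microservice. The endpoint path, port, and parameter names are
# illustrative assumptions, not the documented NIM schema.
import json

def build_infer_request(prompt: str, steps: int = 30,
                        api_key: str = "YOUR_API_KEY"):
    """Assemble URL, headers, and JSON body for a hypothetical /v1/infer endpoint."""
    url = "http://localhost:8000/v1/infer"  # assumed local container address
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
    body = {
        "prompt": prompt,
        "steps": steps,           # assumed parameter names
        "cfg_scale": 4.5,
        "aspect_ratio": "1:1",
    }
    return url, headers, json.dumps(body)

url, headers, payload = build_infer_request("a lighthouse at dusk")
print(url)
print(payload)
```

From here, any HTTP client (curl, requests, a CI job) can send the request, which is exactly why this packaging fits existing enterprise infrastructure.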
The Oscar-Winning VFX Director Who Built Avatar's Effects Just Joined Stability AI
Robert Legato isn't a name most people outside Hollywood know — but inside the film industry, his credits tell the story. He supervised the visual effects for Avatar (the highest-grossing film ever made), Titanic, The Lion King (2019), and The Jungle Book. He's an Academy Award winner. He's now Stability AI's Chief Pipeline Architect.
The title matters. A pipeline architect doesn't just advise on creative direction — they design the workflows that connect disparate tools into a production system. In film VFX, a pipeline might move a single scene through 15 different software packages in a specific sequence. Legato's job is to build the equivalent for AI-generated creative content: a system where Stability AI's image, audio, and video models work together in sequences that professionals can actually use at production scale.
Alongside Legato, Ryan Ellis joined as SVP Head of Product. Ellis previously led product development at Unity — the game engine platform used by an estimated 80% of mobile game developers globally, serving 1.5 million developers. His background is in building tools that massive creative communities depend on daily, where reliability and professional-grade output aren't optional features.
These aren't advisory hires. They're operational leaders whose backgrounds point in the same direction: Stability AI is building serious production infrastructure for professional creative workflows — competing less with AI startups and more with Adobe, Autodesk, and Blackmagic Design's DaVinci Resolve.
Warner Music, Universal Music, and EA All Signed in the Same Week
The same announcement window brought three major industry partnership deals:
- Warner Music Group: Partnership to advance responsible AI music creation tools — one of the Big Three music labels, representing artists including Ed Sheeran, Bruno Mars, and Cardi B
- Universal Music Group: Co-development agreement for professional AI music creation tools — the largest music label globally, home to Taylor Swift, Drake, and The Weeknd
- Electronic Arts: Co-development of generative AI models and workflows specifically for game development — one of the world's largest game publishers (Madden, EA Sports FC, Battlefield)
The music deals are particularly notable. Labels have historically fought AI music companies over copyright and training data. WMG and UMG signing co-development agreements — not just licensing deals — suggests a shift toward frameworks that work for labels, rather than the adversarial approach that dominated 2023–2024. The specifics (royalty splits, artist protections, opt-out policies) aren't public yet, but the direction is clear.
Stable Audio 2.5, announced alongside these partnerships, is described as the first audio model built specifically for enterprise-grade sound production at scale. A companion open-source model, Stable Audio Open Small, was co-released with Arm — the processor architecture that powers 99% of smartphones globally — enabling audio generation that runs entirely on-device without an internet connection or per-generation subscription cost.
Brand Studio, SOC 2 Security, and the Generative AI Platform Push
The umbrella announcement is Brand Studio — an end-to-end creative production platform connecting Stability AI's full model suite. Specific pricing and launch details are sparse, but the positioning is clear: it targets advertising agencies, media companies, and entertainment studios that need high volumes of branded creative content from a single, compliance-verified workflow.
Supporting the enterprise push, Stability AI achieved SOC 2 Type II and SOC 3 compliance — security certifications (think of these as the enterprise equivalent of a health inspection certificate, verifying that a company's data handling, access controls, and privacy practices meet standards required by regulated industries). Without these, companies in finance, healthcare, and legal sectors typically cannot use a vendor's AI services, regardless of technical quality.
Additional enterprise developments:
- Stable Image Services on Amazon Bedrock: Bedrock is Amazon's managed AI platform (companies access AI models without running their own servers); adding Stability AI's image models here lets enterprise teams use AWS billing, compliance controls, and infrastructure they already have
- WPP strategic investment: WPP, one of the world's largest advertising holding companies managing campaigns for global brands, made an undisclosed strategic investment in Stability AI
- Stable Virtual Camera (research preview): Transforms a single 2D image into a 3D video with controllable camera movements — still in research preview (early-access testing stage, not production-ready), with no confirmed commercial launch date
- Stable Video 4D 2.0: Upgraded model for novel-view synthesis (generating what a scene looks like from a different camera angle using only a single video as input) — aimed at game developers and VFX teams
How to Get the Stable Diffusion 3.5 Speed Boost Right Now
If you're running Stable Diffusion 3.5 through tools like ComfyUI or AUTOMATIC1111, the 2X speed improvement requires the TensorRT-optimized pipeline — it won't appear automatically from a model weight update alone. Check your software's GitHub page or settings for TensorRT support status. Stability AI's platform and Amazon Bedrock (for AWS teams) offer the model with optimization enabled out of the box.
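Before hunting through settings, it can help to confirm the relevant packages are even present in your local environment. The snippet below is a quick, generic check; the package names listed are the common PyPI distributions in a typical TensorRT export chain, and your particular setup may use different ones.

```python
# Quick check for packages a TensorRT-optimized pipeline commonly needs.
# Package names are common PyPI distributions (an assumption); your
# specific tool's requirements may differ.
from importlib.util import find_spec

def has_package(name: str) -> bool:
    """True if the package is importable in the current environment."""
    return find_spec(name) is not None

for pkg in ("torch", "tensorrt", "onnx"):  # typical export chain
    status = "found" if has_package(pkg) else "missing"
    print(f"{pkg}: {status}")
```

If `tensorrt` is missing, the optimized path cannot be active no matter what the UI reports, which makes this a useful first diagnostic.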
For audio work, Stable Audio Open Small is worth exploring if you're building mobile apps or tools that need background music and sound effects without ongoing API costs. Running on Arm chips means it works on the same hardware already inside every iPhone and most Android devices — audio generation without a server bill.
Watch Brand Studio closely when full capability details emerge. Stability AI is assembling a cast — an Oscar-winning film VFX architect, a Unity platform veteran, two major music labels, EA, WPP, and Amazon — that suggests the next major release will define whether AI creative production becomes a tool professionals actually depend on, or stays in the demo phase. The infrastructure groundwork laid this week makes the former look increasingly plausible. You can start exploring today at stability.ai or through our AI tools roundup.