
Keep an eye on Soul Cinema 🤩🎥 Comment "SOUL" and I'll send you the prompts 📨🔥 The image model from @higgsfield.ai just got an update, and Soul Cinema can now create truly cinematic scenes. In these videos I show you 10 examples starting from images created with Soul Cinema and Nano Banana 2, using the same prompts with both models. Honestly, I was quite surprised by how well this model handles this type of scene. As a weak point, it doesn't have strong prompt adherence compared to NB2. Also, with Soul ID and Soul HEX you can create images with your own face with great precision and control the color of the images across scenes, respectively. A very interesting model from Higgsfield 👌🏽 By the way, two more things. This comparison makes NB2 look bad, but with the right prompt you can surely get very impressive images from it. And second, all the images were animated with Kling 3.0… what a beast of a video model 🥹

How pabloprompt Made This Higgsfield Soul Cinema AI Video Comparison Tutorial — and How to Recreate It

This case study analyzes a high-performing "A/B Comparison" video featuring two distinct AI image models: Higgsfield Soul Cinema and Nano Banana 2 (per the creator's caption, the resulting stills were then animated with Kling 3.0). The visual centers on a cinematic, melancholic piano performance set within a moody, dimly lit bar. The video leverages the "dark academia" and "jazz noir" aesthetics—characterized by deep shadows, selective overhead lighting, and a rich, textured color palette (teal/green for the top frame, warm/neutral for the bottom). By presenting a side-by-side comparison of how different AI engines interpret the same prompt, the creator taps into the "tech-curiosity" niche, inviting viewers to judge the fidelity, motion, and "soul" of the generated content. This format is a goldmine for engagement, as it naturally prompts comments about which version looks more "real" or "cinematic."

What You’re Seeing: A Visual Breakdown

The video is a vertical split-screen comparison. Both shots feature a middle-aged man in a side profile, intensely focused on playing a piano. The environment is a classic, upscale bar with shelves of liquor bottles blurred in the background, creating a shallow depth of field that emphasizes the subject.

Shot-by-Shot Breakdown (Estimated)

  • 00:00–00:05. Visual: top, Asian man playing piano; bottom, Caucasian man playing piano. Shot language: medium side profile; static camera. Lighting & color: top, teal/green tint, moody; bottom, warm, neutral tones. Viewer intent: establish the comparison hook immediately.
  • 00:05–00:08. Visual: focus on micro-movements: smoke rising (top), hand dexterity (bottom). Shot language: medium shot; shallow depth of field. Lighting & color: high contrast; dramatic overhead "pool" of light. Viewer intent: demonstrate the detail and motion consistency of the AI.
  • 00:08–00:10. Visual: the music reaches a soft crescendo; subjects remain in character. Shot language: static medium shot. Lighting & color: consistent with previous shots. Viewer intent: reinforce the "cinematic" quality of both models.

Why It Went Viral: The Comparison Hook

The Power of "Which is Better?"

The core viral mechanism here is comparative evaluation. By labeling the models (Higgsfield vs. Nano Banana), the creator transforms a simple aesthetic video into a technical benchmark. This forces the viewer to stop scrolling and look closely at the details: the way the smoke curls in the top video versus the realistic hand movements in the bottom. Psychologically, humans are wired to categorize and rank; this video provides the perfect playground for that instinct.

The "Soul Cinema" Hype

The creator specifically mentions an update to "Soul Cinema" by Higgsfield. This taps into the "New & Improved" psychological trigger. For indie creators, staying ahead of AI tool updates is a full-time job; this video serves as a "quick-look" review, providing high value in a short timeframe. The aesthetic itself—a lonely piano player—evokes a strong emotional response (melancholy/sophistication), which increases the likelihood of shares to "mood" or "aesthetic" collections.

Platform Perspective: The Retention Loop

From an Instagram/TikTok perspective, this video excels at Watch Time. Because there are two things happening at once, viewers often watch the video twice—once to focus on the top half and once for the bottom. This "double-watch" signal tells the algorithm the content is highly engaging, pushing it to a broader audience. The caption "Comenta SOUL" (Comment SOUL) is a classic engagement bait that converts passive viewers into active commenters, further boosting the post's reach.

5 Testable Viral Hypotheses

  1. The Split-Screen Benchmark: If you show two versions of the same concept, viewers will stay 40% longer to decide which they prefer.
  2. The "Atmospheric Detail" Hook: Including a small, moving detail (like the rising cigarette smoke) increases perceived "realism" and save rates.
  3. The Technical Keyword Trigger: Using specific AI model names (Higgsfield, Luma, Sora) in the overlay attracts a high-intent, tech-savvy audience.
  4. The Aesthetic Contrast: Using a teal/orange or green/warm color contrast between the two clips makes the split-screen more visually stimulating.
  5. The Low-Friction CTA: Asking for a single-word comment ("SOUL") reduces the "cost" of engagement, leading to a higher comment-to-view ratio.

How to Recreate: Step-by-Step Tutorial

  1. Topic Selection: Choose a "Cinematic Archetype" (e.g., a detective in the rain, a chef in a busy kitchen, or a piano player in a bar). This video suits "Aesthetic" or "Tech/AI" accounts.
  2. Prompt Engineering: Create a detailed prompt focusing on lighting and mood. Example: "Cinematic side profile of a man playing piano, moody jazz bar, overhead spotlight, shallow depth of field, 4k, highly detailed."
  3. Model Selection: Use two different AI video tools (e.g., Higgsfield.ai for one, and Luma Dream Machine or Runway Gen-2 for the other) to get distinct "looks."
  4. Consistency Check: Ensure both models use the same basic character description (age, clothing, action) to make the comparison fair.
  5. Editing the Split: Use CapCut or Premiere Pro. Place the two clips in a vertical 9:16 frame. Add clear text overlays identifying each model.
  6. Sound Design: Use a melancholic, royalty-free piano track. Ensure the "vibe" of the music matches the slow, cinematic movement of the video.
  7. Color Grading: Enhance the "AI look" by adding a slight film grain or adjusting the contrast to make the shadows deeper.
  8. Publishing Strategy: Use a "Comment for Prompts" strategy to drive engagement. This automates lead generation if you use a tool like ManyChat.
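The split-screen assembly in step 5 can also be scripted instead of done in CapCut or Premiere Pro. Below is a minimal Python sketch that builds an ffmpeg command to scale two clips and stack them into a single 1080×1920 (9:16) frame. The file names are placeholder assumptions, actually running the command requires ffmpeg to be installed, and the text overlays identifying each model would still be added in your editor.

```python
# Sketch: build an ffmpeg command that stacks two clips into a vertical
# 9:16 split-screen. File names below are placeholders; executing the
# command requires ffmpeg and real input files.

def vstack_command(top_clip, bottom_clip, out_path,
                   width=1080, half_height=960):
    """Scale each clip to fill half of a 1080x1920 frame, then vstack them."""
    filter_graph = (
        f"[0:v]scale={width}:{half_height}[top];"
        f"[1:v]scale={width}:{half_height}[bot];"
        f"[top][bot]vstack=inputs=2[v]"
    )
    return [
        "ffmpeg", "-y",
        "-i", top_clip,      # e.g. the Higgsfield render
        "-i", bottom_clip,   # e.g. the Nano Banana 2 render
        "-filter_complex", filter_graph,
        "-map", "[v]",
        out_path,
    ]

cmd = vstack_command("higgsfield.mp4", "nano_banana.mp4", "comparison.mp4")
print(" ".join(cmd))
```

Building the command as a list (rather than one shell string) keeps file names with spaces safe if you later pass it to `subprocess.run`.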

Growth Playbook: Distribution & Scaling

3 Opening Hook Lines

  • "Is Higgsfield actually better than [Competitor]?"
  • "The new Soul Cinema update is a game changer for AI video."
  • "Stop using [Old Tool], try this for cinematic AI shots."

4 Caption Templates

  1. The Reviewer: "Testing the new Soul Cinema update vs Nano Banana 2. The lighting in the first one is 🤯. Which one do you prefer? Comment 'SOUL' for the prompts."
  2. The Aesthetic: "Late night jazz vibes, powered by AI. 🎹✨ It’s getting harder to tell what’s real. Which version feels more 'cinematic' to you?"
  3. The Tutorial: "How I created these cinematic shots in 5 minutes. 🎥 I used two different AI models to see which handles mood better. Full breakdown in my bio!"
  4. The Engagement Bait: "Top or Bottom? 👇 The AI video race is heating up. Comment your favorite and I'll send you my secret prompt list."

Hashtag Strategy

  • Broad: #AI #ArtificialIntelligence #Cinematic #Filmmaking (To reach general tech/art fans)
  • Mid-Tier: #AIVideo #HiggsfieldAI #DigitalArt #CreativeTech (To reach AI enthusiasts)
  • Niche: #SoulCinema #AIComparison #PianoAesthetic #IndieCreator (To reach specific tool users and aesthetic collectors)

Frequently Asked Questions

What tools make it look the most similar?

Using Higgsfield.ai and Luma Dream Machine with the exact same text prompt will yield the most comparable results.

What are the 3 most important words in the prompt?

"Cinematic lighting," "Side profile," and "Shallow depth of field."

Why does the generated face look inconsistent?

AI often struggles with micro-expressions; using a "side profile" shot like in this video helps hide facial inconsistencies.

How can I avoid making it look like AI?

Add a layer of real film grain and avoid high-speed movements which often cause "morphing" artifacts.

Is it easier to go viral on Instagram or TikTok with this?

Instagram Reels currently favors high-aesthetic "mood" content, while TikTok favors the technical "how-to" breakdown.

How should I properly disclose AI use?

Use the platform's built-in "AI-generated" tag and mention the tools used in the caption to build trust.