Keep an eye on Soul Cinema 🤩🎥 Comment "SOUL" and I'll send you the prompts 📨🔥 The image model from @higgsfield.ai just got an update, and Soul Cinema can now create truly cinematic scenes. In these videos I show you 10 examples based on images created with Soul Cinema and Nano Banana 2, using the same prompts in both models. Honestly, I was quite surprised by how well this model handles these kinds of scenes. As a weak point, it has to be said that its prompt adherence is not great compared to NB2. In addition, Soul ID lets you create images with your own face with great precision, and Soul HEX lets you control the color of the images across scenes. A very interesting model from Higgsfield 👌🏽 By the way, two more things. This comparison makes NB2 look bad, but with the right prompt you can surely get top-tier images out of it. And second, all the images were animated with Kling 3.0… what a beast of a video model 🥹

How pabloprompt Made This AI Video Comparison (Higgsfield vs. Nano Banana 2), and How to Recreate It

This case study analyzes a high-performing "AI Model Comparison" video that pits Higgsfield Soul Cinema against Nano Banana 2. The visual core is a cinematic, high-octane sequence of a rugged man riding a motorcycle down a desert highway at sunset. It leverages the "A vs. B" psychological trigger, forcing viewers to evaluate image quality, motion consistency, and lighting realism. With its warm, saturated "Soul Cinema" aesthetic contrasted against a gritty, detailed alternative, the video serves as both a tech demonstration and a visual mood board for indie filmmakers looking to harness AI for cinematic storytelling.

What You’re Seeing

The video is a vertical split-screen comparison. The top half features "Higgsfield Soul Cinema," characterized by an extremely vibrant, almost hyper-real sunset with deep oranges and reds. The bottom half, "Nano Banana 2," begins in a stylized black and white before transitioning into a more naturalistic, dusty color grade. Both shots feature a middle-aged man with a beard and leather jacket, riding a cruiser-style motorcycle directly toward the camera on a straight, cracked asphalt road.

Shot-by-Shot Breakdown (Estimated)

Time Range | Visual Content | Shot Language | Lighting & Color | Viewer Intent
00:00–00:03 | Motorcycle approaching from a distance | Low-angle tracking shot (moving backward) | Top: deep red sunset silhouette. Bottom: high-contrast B&W | Hook: instant visual "battle" setup
00:03–00:06 | Subject becomes clearer; facial details visible | Medium shot, 35mm feel | Top: intense lens flare. Bottom: transition to color, dusty brown tones | Comparison: evaluate facial consistency and skin texture
00:06–00:10 | Close-up on the rider and handlebars | MCU (medium close-up) | Top: warm glow, soft edges. Bottom: sharp details, realistic dust particles | Reinforce: showcase motion stability, "Soul" vs. "Realism"

Why It Went Viral

The Power of the "Benchmark" Topic

This video taps into the "Tech Comparison" niche, which is currently exploding as AI models iterate weekly. By labeling the models clearly, the creator positions themselves as an authority or "tester." The choice of a motorcycle ride is strategic: it involves complex motion (spinning wheels, a moving road, wind in the hair, dust clouds), all traditional "fail points" for AI. Viewers watch closely to see which model "breaks" first, driving up watch time and repeat views.

Platform Perspective: The Engagement Loop

From a platform algorithm perspective, this video is an engagement goldmine. The caption "Comenta ‘SOUL’ y te envío los prompts" (Comment 'SOUL' and I'll send you the prompts) is a classic Comment-to-DM automation trigger. This signals to Instagram that the content is highly interactive, pushing it to the Explore page. The "A vs. B" format naturally leads to "Top is better" or "Bottom looks more real" comments, further boosting the post's reach through community debate.

5 Testable Viral Hypotheses

  • The Comparison Bias: Presenting two options forces the brain to choose a favorite, increasing the likelihood of a comment. Replicate by: Splitting any two AI generations of the same prompt.
  • The "Secret Sauce" Gatekeeping: Offering the prompts only via comment creates a "value exchange" that users feel is worth the effort. Replicate by: Using ManyChat to automate prompt delivery.
  • Cinematic Archetypes: The "lonely rider at sunset" is a universal cinematic trope that evokes freedom and coolness. Replicate by: Using classic movie tropes (noir detective, sci-fi explorer).
  • The Motion Stress Test: Showing fast-moving objects (motorcycles) proves the AI's capability. Replicate by: Generating videos with high-speed motion or complex physics.
  • Visual Evolution: The bottom clip's transition from B&W to color acts as a secondary hook within the comparison. Replicate by: Using a simple "color grade" transition in editing.

How to Recreate: From 0 to 1

Step 1: Topic Selection

Choose a high-motion cinematic concept. For this style, "Man on motorcycle," "Car chase," or "Person running through a forest" work best because they all stress the AI's temporal consistency.

Step 2: Prompt Engineering (The "Master" Prompt)

Create a detailed prompt that describes the subject, wardrobe, and lighting. You must use the exact same prompt for both models to ensure a fair comparison. (See the Prompt Synthesis section below for the specific prompt used here).
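The actual prompt is gatekept behind the comment trigger, so here is a minimal sketch of how a master prompt could be assembled from the components this step names (subject, wardrobe, lighting, camera). Every field value below is an illustrative assumption reconstructed from the scene description, not the creator's real prompt:

```python
# Hypothetical master-prompt builder. The field values are assumptions
# based on the scene described in the article, NOT the creator's
# actual (gatekept) prompt.
def build_master_prompt(subject, wardrobe, action, setting, lighting, camera):
    # Assembling the prompt from fixed components makes it trivial to
    # feed the exact same string to both models, which is what keeps
    # the A/B comparison fair.
    return ", ".join([subject, wardrobe, action, setting, lighting, camera])

prompt = build_master_prompt(
    subject="rugged middle-aged man with a beard",
    wardrobe="worn leather jacket",
    action="riding a cruiser motorcycle toward the camera",
    setting="straight cracked asphalt desert highway",
    lighting="backlit by a deep orange sunset, intense lens flare",
    camera="cinematic low-angle tracking shot, 35mm film look",
)
print(prompt)
```

Paste the resulting string, unchanged, into both models.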

Step 3: Generate with Model A (Higgsfield)

Upload your prompt to Higgsfield.ai. Select the "Soul Cinema" or "Cinematic" style preset. Focus on getting that warm, high-contrast sunset look.

Step 4: Generate with Model B (Nano/Luma/Kling)

Use the same prompt in a competing model. If you want the B&W effect, you can either prompt for it or apply a filter in post-production for the first 2 seconds.
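If you go the post-production route for the B&W opening, one way to do it outside an editor is ffmpeg's `hue` filter with timeline editing, which desaturates only the first two seconds. This is a sketch under the assumption that ffmpeg is installed; the filenames are placeholders:

```python
# Sketch: desaturate only the first N seconds of a clip with ffmpeg,
# reproducing the B&W-to-color transition in post.
# Assumes ffmpeg is installed; filenames are placeholders.
def bw_intro_cmd(src="model_b.mp4", dst="model_b_bw_intro.mp4", seconds=2):
    return [
        "ffmpeg", "-y", "-i", src,
        # hue=s=0 removes saturation; enable='lt(t,N)' restricts the
        # filter to timestamps below N seconds (timeline editing)
        "-vf", f"hue=s=0:enable='lt(t,{seconds})'",
        "-c:a", "copy", dst,
    ]

print(" ".join(bw_intro_cmd()))
# To render: import subprocess; subprocess.run(bw_intro_cmd(), check=True)
```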

Step 5: Ensure Character Consistency

If the models allow for an "Image-to-Video" or "Character Reference" (Cref), use a single high-quality AI-generated portrait as the starting point for both video generations.

Step 6: Vertical Split-Screen Editing

In CapCut or Premiere Pro, stack the two videos. Use a 9:16 canvas. Apply a thin white or black divider line between them to keep the UI clean.
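As an alternative to CapCut or Premiere, the same stack-plus-divider layout can be expressed as an ffmpeg filtergraph. This is a sketch assuming ffmpeg is installed and both clips share a duration; filenames are placeholders:

```python
# Sketch: 9:16 split-screen assembly with ffmpeg.
# Assumes ffmpeg is installed; filenames are placeholders.
def split_screen_cmd(top="higgsfield.mp4", bottom="nano_banana.mp4",
                     out="comparison.mp4"):
    graph = (
        # Scale each clip to one half of a 1080x1920 vertical canvas
        "[0:v]scale=1080:960,setsar=1[top];"
        "[1:v]scale=1080:960,setsar=1[bot];"
        # Stack them, then draw a thin white divider over the seam
        "[top][bot]vstack=inputs=2,"
        "drawbox=x=0:y=958:w=1080:h=4:color=white:t=fill[v]"
    )
    return ["ffmpeg", "-y", "-i", top, "-i", bottom,
            "-filter_complex", graph, "-map", "[v]",
            "-c:v", "libx264", out]

print(" ".join(split_screen_cmd()))
# To render: import subprocess; subprocess.run(split_screen_cmd(), check=True)
```

The 4-pixel `drawbox` centered on the seam plays the role of the divider line.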

Step 7: Text Overlays & Labels

Add clear, sans-serif labels (e.g., "MODEL A" vs "MODEL B") in the top corners. Use a bold font like Montserrat or Archivo to maintain a "tech-review" aesthetic.

Step 8: Audio & CTA

Add a cinematic soundscape (engine revs, wind, low bass hits). End the video with a text overlay: "Which one is better? Comment below."

Growth Playbook

3 Opening Hook Lines

  • "Is Higgsfield actually better than [Competitor]? Let's find out."
  • "The secret to cinematic AI video is finally here."
  • "Stop scrolling if you want to make AI videos that look like movies."

4 Caption Templates

  1. The Direct Comparison: "Higgsfield vs. Nano Banana 2. 🎥 The 'Soul Cinema' update is a game changer for lighting. Which side do you prefer? Left or Right? 👇"
  2. The Value Lead: "I spent 5 hours testing the new Higgsfield update so you don't have to. Here are the results. Want the prompts? Comment 'SOUL' below! 📨"
  3. The Technical Deep Dive: "Look at the dust physics in the bottom clip vs the lighting in the top. AI video is evolving too fast! Which model handles motion better? 🏍️"
  4. The Short & Punchy: "Cinematic AI is getting scary. 🤯 Top or Bottom? Choose your winner. #aivideo #filmmaking"

Hashtag Strategy

  • Broad: #aivideo #artificialintelligence #filmmaking #digitalart
  • Mid-tier: #higgsfield #cinematicai #videoediting #contentcreator
  • Niche: #soulcinema #aicomparison #motorcycleaesthetic #indiefilmmaker

Frequently Asked Questions

What tools make it look the most similar?

Use Higgsfield for the "Soul" look and Luma Dream Machine or Kling AI for the realistic textures.

What are the 3 most important words in the prompt?

"Cinematic," "backlit," and "tracking shot."

Why does the generated face look inconsistent?

AI struggles with faces in motion; use a "Character Reference" image to lock the identity.

How can I avoid making it look like AI?

Add real film grain and slight camera shake in post-production to mask AI smoothness.
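If you would rather add the grain programmatically than in an editor, ffmpeg's `noise` filter can overlay temporal grain. A minimal sketch, assuming ffmpeg is installed (filenames and the strength value are placeholders; camera shake is still easier to fake with keyframes in an editor):

```python
# Sketch: overlay temporal film grain with ffmpeg to mask AI smoothness.
# Assumes ffmpeg is installed; filenames/strength are placeholders.
def grain_cmd(src="ai_clip.mp4", dst="ai_clip_grain.mp4", strength=12):
    return [
        "ffmpeg", "-y", "-i", src,
        # alls = grain strength on all planes; allf=t+u makes the noise
        # temporal (changes every frame) and uniformly distributed
        "-vf", f"noise=alls={strength}:allf=t+u",
        "-c:a", "copy", dst,
    ]

print(" ".join(grain_cmd()))
# To render: import subprocess; subprocess.run(grain_cmd(), check=True)
```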

Is it easier to go viral on Instagram or TikTok with this?

Instagram is currently better for "aesthetic" cinematic AI, while TikTok favors "how-to" tutorials.