
Watch out for Soul Cinema 🤩🎥 Comment "SOUL" and I'll send you the prompts 📨🔥 The image model from @higgsfield.ai just got an update, and with Soul Cinema you can now create truly cinematic scenes. In these videos I show you 10 examples built from images created with Soul Cinema and Nano Banana 2, using the same prompts in both models. Honestly, I was quite surprised by how well this model handles this kind of scene. As a weak point, it doesn't follow the prompt as closely as NB2. In addition, Soul ID lets you create images of your own face with great precision, and Soul HEX lets you control the color of the images across scenes. A very interesting model from Higgsfield 👌🏽 By the way, two more things. This comparison makes NB2 look bad, but with the right prompt you can surely get top-tier images out of it too. And second, all the images were animated with Kling 3.0… what a beast of a video model 🥹

How pabloprompt Made This "Mad Scientist" Higgsfield vs. Nano Banana AI Comparison — and How to Recreate It

This viral case study examines a high-energy "Versus" comparison video between two AI image models: Higgsfield Soul Cinema and Nano Banana 2, with the resulting stills animated in Kling 3.0. The content features a classic "Mad Scientist" archetype—an elderly man with wild grey hair, a lab coat, and a red tie—celebrating a breakthrough in a cluttered, atmospheric laboratory. The visual hook relies on high-contrast cinematic lighting and dynamic blue electric sparks (Tesla-coil effects) that sync with a manic, high-pitched voiceover. By positioning the two AI outputs in a split-screen format, the creator leverages the "tech comparison" trend to drive engagement, comments, and saves from fellow creators looking for the next best tool in the AI video space.

What You’re Seeing: A Visual Breakdown

The video is a vertical split-screen comparison. The top half showcases "Higgsfield Soul Cinema," characterized by rich color, warm ambient lighting, and vibrant blue electrical discharges. The subject is an elderly scientist with glasses, whose facial expressions are fluid and emotionally charged. The bottom half features "Nano Banana 2," which adopts a more stylized, almost black-and-white "vintage film" aesthetic with higher detail in the background clutter (chalkboards, tubes, wires) but slightly different character features (goggles instead of glasses). Both scenes depict the same "Eureka" moment, with the character shouting in triumph as electricity crackles around him.

Shot-by-Shot Analysis

| Time Range | Visual Content | Shot Language | Lighting & Tone | Viewer Intent |
| --- | --- | --- | --- | --- |
| 00:00–00:03 | Scientist shouts "I did it!" with blue sparks flying. | Medium Close-Up (MCU) | High contrast, blue rim light from sparks. | Instant hook: high energy and visual spectacle. |
| 00:03–00:06 | Scientist repeats "I finally did it!" with arms shaking. | MCU, slight camera shake. | Warm laboratory glow vs. cool electric blue. | Reinforce persona: establishing the "Mad Scientist" trope. |
| 00:06–00:10 | Head tilts back, shouting "They said it was impossible!" | MCU, low-angle feel. | Dramatic shadows, flickering light. | Emotional payoff: completing the narrative arc. |

Why It Went Viral: The "Versus" Psychology

The core of this video's success lies in the Comparison Framework. Humans are biologically wired to categorize and rank things. By placing two AI models side-by-side, the creator forces the viewer to make a choice, which naturally leads to high comment volume ("I like the top one better," "The bottom one has more detail"). This "A vs B" setup is a goldmine for platform algorithms because it increases watch time as users look back and forth between the two panels to spot differences.

Furthermore, the choice of the "Mad Scientist" theme taps into a Universal Archetype. It’s a character everyone recognizes from pop culture (Doc Brown, Frankenstein), making the content immediately relatable and "meme-able." The high-intensity audio—a manic, screaming voice—acts as a pattern interrupt in a feed of quiet or music-heavy videos, grabbing attention within the first 0.5 seconds.

From a platform perspective, the video triggers "Save" signals because it serves as a Technical Benchmark. Creators save the video to remember the names of the models (Higgsfield vs. Nano Banana) for their own future projects. The caption "Comment SOUL" is a classic engagement bait that automates lead generation or further information delivery, tricking the algorithm into seeing a high "comment-to-view" ratio.

5 Testable Viral Hypotheses

  • Hypothesis 1: The Split-Screen Comparison. Side-by-side visuals increase "re-watch" rates as viewers compare details. Replicate by: Comparing two different prompts or two different AI tools.
  • Hypothesis 2: High-Energy Audio Hook. Manic shouting or loud sound effects at 0:00 prevent scrolling. Replicate by: Using "extreme emotion" voiceovers in your first 2 seconds.
  • Hypothesis 3: Visual Contrast (Blue on Dark). Bright blue sparks against a dark lab create high visual "pop." Replicate by: Using complementary colors (orange/blue) and high-contrast lighting.
  • Hypothesis 4: The "Expert" Persona. Showing technical model names builds authority. Replicate by: Labeling your AI tools clearly on screen.
  • Hypothesis 5: Engagement Loop. Asking viewers to comment a specific keyword inflates the comment-to-view ratio. Replicate by: "Comment [KEYWORD] for the prompt."

How to Recreate: From 0 to 1

Step 1: Define Your Archetype

Choose a high-emotion character. Instead of a "man in a lab," use a "Mad Scientist on the verge of a breakthrough." This adds narrative weight to the movement and expressions.

Step 2: Generate Consistent Character Images

Use Midjourney or DALL-E 3 to create your base image. Use a prompt like: "Elderly mad scientist, wild grey hair, lab coat, red tie, intense expression, cluttered laboratory background, cinematic lighting."

Step 3: Select Your AI Video Engines

To replicate the comparison, use two different tools. For example, use Luma Dream Machine for one and Kling AI or Higgsfield for the other. This provides the "Versus" content.

Step 4: Craft the Motion Prompt

Focus on the "Eureka" moment. Your prompt should include: "Character shouting with joy, arms raised, blue electric sparks jumping between Tesla coils, flickering lights, 4k, cinematic."

Step 5: Audio Generation & Sync

Use ElevenLabs to generate a "manic/excited" voiceover. Use the script: "I did it! I finally did it! They said it was impossible!" Use a tool like Sync Labs or Hedra to ensure the character's lips match the audio perfectly.
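If you prefer to script this step, ElevenLabs exposes an HTTP text-to-speech endpoint. The sketch below only builds the request; the voice ID, API key, and voice-setting values are placeholders, and the endpoint shape follows ElevenLabs' public v1 API, so verify against their current docs before relying on it:

```python
import json

API_BASE = "https://api.elevenlabs.io/v1"

def build_tts_request(text: str, voice_id: str, api_key: str):
    """Build the URL, headers, and JSON body for an ElevenLabs TTS call."""
    url = f"{API_BASE}/text-to-speech/{voice_id}"
    headers = {"xi-api-key": api_key, "Content-Type": "application/json"}
    body = {
        "text": text,
        "model_id": "eleven_multilingual_v2",
        # Lower stability tends to give a more expressive, "manic" delivery.
        "voice_settings": {"stability": 0.25, "similarity_boost": 0.75},
    }
    return url, headers, json.dumps(body)

if __name__ == "__main__":
    url, headers, body = build_tts_request(
        "I did it! I finally did it! They said it was impossible!",
        voice_id="YOUR_VOICE_ID",  # placeholder
        api_key="YOUR_API_KEY",    # placeholder
    )
    # POST this with any HTTP client and write the audio response to voiceover.mp3.
    print(url)
```

From there, feed the resulting audio file into Sync Labs or Hedra for the lip-sync pass.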

Step 6: Split-Screen Editing

In CapCut or Premiere Pro, stack the two videos. Use a 9:16 canvas and crop each video to fill half the screen. Add clear text labels (e.g., "MODEL A" vs "MODEL B").
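The same stack-and-label edit can be done headlessly with FFmpeg instead of CapCut or Premiere. A minimal sketch that builds the command for a 1080x1920 (9:16) canvas — file names are placeholders, and `drawtext` may need an explicit `fontfile=` on FFmpeg builds without fontconfig:

```python
def build_vstack_cmd(top, bottom, out, label_top="MODEL A", label_bottom="MODEL B"):
    """Build an ffmpeg command that stacks two clips into a 9:16 split screen."""
    def panel(idx, label, name):
        # Scale each clip to cover half the canvas, crop the overflow,
        # and burn in a corner label.
        return (
            f"[{idx}:v]scale=1080:960:force_original_aspect_ratio=increase,"
            f"crop=1080:960,"
            f"drawtext=text='{label}':x=20:y=20:fontsize=48:fontcolor=white[{name}]"
        )
    filt = ";".join([
        panel(0, label_top, "top"),
        panel(1, label_bottom, "bot"),
        "[top][bot]vstack=inputs=2[v]",
    ])
    return ["ffmpeg", "-i", top, "-i", bottom,
            "-filter_complex", filt, "-map", "[v]", "-c:v", "libx264", out]

# Example (placeholder file names):
# subprocess.run(build_vstack_cmd("higgsfield.mp4", "nano_banana.mp4", "versus.mp4"))
```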

Step 7: Add Visual Polish

Overlay a slight film grain and color grade both clips to have a similar "cinematic" feel, even if the models differ. This makes the comparison about the *motion* and *detail* rather than just color.
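If you are automating the edit with FFmpeg as above, the grain-and-grade pass can be expressed as a video-filter chain applied to each clip; the strength and grade values below are starting points to adjust by eye, not the creator's actual settings:

```python
def polish_filter(grain=12, contrast=1.05, saturation=0.9):
    """Return an ffmpeg -vf string: temporal film grain plus a light unifying grade."""
    return (
        f"noise=alls={grain}:allf=t+u,"                    # animated uniform grain
        f"eq=contrast={contrast}:saturation={saturation}"  # subtle shared grade
    )

# Usage: ffmpeg -i clip.mp4 -vf "noise=alls=12:allf=t+u,eq=contrast=1.05:saturation=0.9" graded.mp4
```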

Step 8: Implement the "Comment Hook"

Add a text overlay at the end: "Which one is better? Comment 'PROMPT' for the settings." This drives the engagement needed for the algorithm to push the video.

Growth Playbook: Distribution & Scaling

3 Ready-to-Use Opening Hooks

  • "Stop using [Old Tool], this new AI model just changed everything."
  • "Can you spot the difference between these two AI videos?"
  • "I tested the top 2 AI video generators so you don't have to."

4 Caption Templates

The Comparison: "Higgsfield vs Nano Banana. 🎥 One is cinematic, the other is hyper-detailed. Which one wins for you? Comment 'SOUL' and I'll send you the exact prompts I used! 🚀 #aivideo #creativetech"

The Tutorial: "How to get this 'Mad Scientist' look in 3 steps. 🧪 1. Generate in [Tool], 2. Sync in [Tool], 3. Edit in CapCut. Full prompt in my bio! ⚡️ #aiart #tutorial"

Hashtag Strategy

  • Broad: #AI #ArtificialIntelligence #DigitalArt #TechTrends
  • Mid-Tier: #AIVideo #GenerativeAI #Higgsfield #ContentCreatorTools
  • Niche: #SoulCinema #MadScientistAesthetic #AIComparison #IndieCreator

Frequently Asked Questions

What tools make it look the most similar?

Using the same base image as a "Character Reference" in tools like Higgsfield or Luma ensures the scientist looks the same.

What are the 3 most important words in the prompt?

"Cinematic lighting," "Electric discharge," and "Manic expression."

Why does the generated face look inconsistent?

This usually happens if the motion strength is too high; try lowering the "Motion Bucket" or "Creativity" slider.

How can I avoid making it look like AI?

Add real film grain overlays and ensure the lighting is "motivated" by objects in the scene (like the sparks).

Is it easier to go viral on Instagram or TikTok with this?

Instagram favors high-aesthetic "cinematic" comparisons, while TikTok favors the raw "tutorial" aspect of the comparison.

How should I properly disclose AI use?

Use the platform's built-in "AI-Generated" label and mention the tools in your caption to build trust.