AI Brainrot Meme Maker

Make overstimulating, chaotic short-form meme videos without assembling every absurd layer by hand. This page helps users build brainrot-style clips with stacked visuals, loud pacing, TTS energy, and internet-native randomness.

Video
by.shlabu
GLOBAL LOCK: A vertical 9:16 creator tutorial reel that demonstrates a hybrid AI-image workflow by pairing one model for aesthetic direction and another for realism. The reel alternates between cinematic desert scenes, chemistry-lab inserts, oversized statement text, and a product-style interface showing model selection. The main cinematic world is a sunbaked desert trailer setting with retro Americana energy: a dusty camper trailer, dry shrubs, mountain backdrop, golden-hour or hard daylight, and two attractive young adults wearing coordinated yellow outfits. The mood should feel like a premium AI film still sequence. The tutorial point is that Midjourney builds the aesthetic, Nano Banana adds realism, and Syntx AI provides access to both under one subscription.

[00:00-00:12] Open on a crisp cinematic desert setup featuring a young woman sitting outside a weathered trailer in a bright yellow jumpsuit or matching yellow outfit. Large text overlays tease that “nobody” is telling viewers the real trick. The frame should feel polished and editorial, with dry desert mountains, a pale old camper, folding chairs, and harsh clean sunlight. Then cut to a young blond man in a similar yellow outfit turning near the same trailer, reinforcing the shared visual universe.

[00:12-00:22] Shift into a montage that visually explains the combination principle. Show chemistry-lab close-ups with gloved hands pouring colored liquids into beakers and test tubes, then layer or collage those inserts with close-up portraits of the desert characters. The point is metaphorical and structural: one ingredient contributes style, the other contributes realism. Keep the typography bold and the edits quick enough to feel like a creator revealing a secret formula.

[00:22-00:30] Reveal the operational proof. Show a dark interface with a “Model selection” dropdown open and Nano Banana highlighted, alongside Midjourney, Seedream, Sora, Flux, Runway Frames, Imagen 4, and more. This is the credibility moment: viewers can see the exact tool stack and understand that the workflow depends on selecting and combining different models inside one platform.

[00:30-00:35] Return to the finished desert footage with the woman in yellow outside the trailer, now with closing CTA text promising the link and prompts for anyone who comments. The final feeling should be that the cinematic result is not the output of one model alone, but of a deliberate pairing between style engine and realism engine.

NEGATIVE PROMPT: generic one-model output, muddy trailer park visuals, inconsistent wardrobe between characters, overprocessed skin, sterile lab images, weak desert lighting, unreadable interface, random color grading, low-detail realism, boring tutorial pacing.

SHOT PROMPTS: desert trailer cinematic scene; woman in yellow jumpsuit outside camper; man in matching yellow outfit; chemistry beaker montage; ingredient-combination concept; model selection dropdown; Nano Banana and Midjourney workflow; Syntx AI multi-tool access; comment AI CTA.

SPEECH PACK: Spoken delivery should feel like a creator exposing a hidden system. Tone is confident, slightly conspiratorial, and conversion-focused, emphasizing “Midjourney for aesthetic,” “Nano Banana for realism,” and “comment AI.”
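
The style-plus-realism pairing this prompt describes can be sketched as a tiny two-stage config. This is an illustrative assumption, not an API contract: the model identifiers below are placeholders standing in for the tools named in the reel, and `build_stages` is a hypothetical helper.

```python
# Illustrative sketch of the two-model pairing described above.
# Model identifiers are placeholders, not real API names.
PIPELINE = {
    "aesthetic": "midjourney",   # builds the cinematic look
    "realism": "nano-banana",    # pushes the frame toward photoreal detail
}

def build_stages(subject: str) -> list[tuple[str, str]]:
    """Return ordered (model, prompt) stages for one shot."""
    return [
        (PIPELINE["aesthetic"],
         f"cinematic desert trailer still, {subject}, golden hour, retro Americana"),
        (PIPELINE["realism"],
         f"refine toward photoreal skin and texture detail: {subject}"),
    ]

stages = build_stages("woman in yellow jumpsuit outside camper")
```

The point of the split is the one the tutorial makes: the first stage owns aesthetic direction, the second owns realism, and the platform's job is just routing the same subject through both in order.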
Video

INVARIANTS TO LOCK
- Vertical 9:16 tutorial Reel about making handmade crafting videos with AI.
- Main talking-head presenter is a young adult man in a black hoodie and backwards black cap, framed mid-shot against a dark indoor background.
- Supporting example visuals show rustic village crafting scenes in warm earthy daylight: handmade sculptures, carved figures, people working outdoors, and a surreal luxury-car-in-village juxtaposition.
- Screen recordings and phone mockups demonstrate how to source or structure the content inside an app workflow.
- Tone is “how to make viral videos” with direct platform-growth framing.

SHOTLIST
1. [00:00-00:06] Open on high-view-count handmade village visuals with bold text like HOW TO MAKE VIRAL VIDEOS, showing a green luxury sports car in a rural handcrafted setting.
2. [00:06-00:12] Presenter appears explaining the concept while phone mockups show scrolling grids of crafting clips.
3. [00:12-00:18] More examples of handmade scenes: carved statues, people working in dirt courtyards, vertical-video thumbnails with large view counts.
4. [00:18-00:26] Screen recordings display app lists, prompts, and a selected topic or generator labeled Handmade Craft.
5. [00:26-00:35] Presenter returns to explain how to open the workflow, store the idea, and turn one handcrafted niche into repeatable viral short-form content.

STYLE BIBLE
Visual style: creator-growth tutorial mixed with rustic AI-generated craft-content examples.
Camera signature: static talking-head inserts, phone UI overlays, grid thumbnails, and attention-grabbing example clips.
Lighting signature: presenter in dim neutral indoor light; supporting examples in warm outdoor village daylight.
Grade signature: social-platform contrast, bright yellow text accents, earthy browns in craft scenes, clean UI whites.
Speech style: fast, instructional, platform-native, optimized around virality and repeatability.

MASTER PROMPT
GLOBAL LOCK: Create a vertical tutorial Reel showing how to make viral handmade crafting videos with AI. Keep a young male creator in a black hoodie and backwards black cap as the talking-head guide. Intercut him with grids of short-form craft examples, phone mockups, and screen recordings. The example scenes should feature rural handmade environments with dirt courtyards, sculpted figures, artisans, huts, and high-contrast visual hooks like a green luxury sports car appearing inside a village craft scene. The whole structure should feel like a growth hack breakdown for short-form platforms.

[00:00-00:06] Open with a collage of high-performing handmade-craft-style clips in a rural village aesthetic, including a green sports car absurdly placed in the scene. Overlay bold text such as HOW TO MAKE VIRAL VIDEOS.

[00:06-00:11] Cut to the presenter speaking directly to camera while a phone screen mockup beside him shows a grid of vertical craft content. He frames the opportunity as a repeatable niche, not a one-off trend.

[00:11-00:17] Show more example clips: carved statues, artisans working outdoors, and platform thumbnails with large view counts. The imagery should feel satisfying and scrollable.

[00:17-00:24] Transition into process screens: app menus, topic selection, and a prompt or tool page labeled Handmade Craft. This is the workflow proof section.

[00:24-00:35] Return to the presenter to explain how to open the niche, store the workflow, and repeat the concept across TikTok and Shorts. End with the feeling that this can be productized into a repeatable content system.

NEGATIVE PROMPT
Do not make the craft examples generic factory footage or polished luxury ads. Avoid unclear handmade action, unreadable UI, weak hook text, or lifeless presenter delivery. The contrast between rustic craft content and strategic AI workflow is the point.

SPEECH PACK
[00:00-00:12] Speaker A. Meaning: handmade craft videos can be turned into viral AI content with the right format. Delivery: direct, energetic.
TAKE_A: “Here is how to make handmade crafting videos with AI that actually go viral.”
TAKE_B: “This niche is blowing up, and the format is way easier to build than people think.”
TAKE_C: “If you want a repeatable viral niche, handmade craft content is one of the best to study.”

[00:12-00:24] Speaker A. Meaning: show examples and platform distribution across TikTok and Shorts. Delivery: tutorial pacing.
TAKE_A: “The key is packaging the visuals in the right format and pushing them across TikTok and Shorts.”
TAKE_B: “You need satisfying scenes, strong thumbnail moments, and a workflow you can repeat fast.”
TAKE_C: “This works because the visuals are simple, surprising, and easy to scale.”

[00:24-00:35] Speaker A. Meaning: the workflow can be saved and reused as a system. Delivery: practical close.
TAKE_A: “Once you set up the workflow, this becomes a repeatable content machine.”
TAKE_B: “You are not making one video, you are building a niche system.”
TAKE_C: “Use the same workflow, swap the craft scenario, and you can keep publishing.”
Video

MASTER PROMPT
GLOBAL LOCK: Vertical creator tutorial reel about making rust-cleaning videos with generative AI. A male host in a cap speaks to camera while the reel alternates between rusty object examples, cleaning transformations, tool screens, and before-after visuals. Keep the process clear and the transformation obvious.

[00:00-00:05] Open on bold text and rust-cleaning examples.
[00:05-00:12] Show the host plus rusty object references and early before-after visuals.
[00:12-00:20] Move through workflow pages and tool screens.
[00:20-00:30] Show more transformation outputs from rusted to clean metal.
[00:30-00:44] End with recap examples and workflow close.

NEGATIVE PROMPT
Avoid muddy textures, weak before-after contrast, unreadable UI, inconsistent object geometry, and robotic host delivery.

SPEECH PACK
Open by framing the topic as how to make rust-cleaning videos. Walk through references, tools, prompts, and transformations. Close by reinforcing the workflow and creator use case.
Video
GLOBAL LOCK: preserve a creator-led talking-head tutorial format mixed with vertical phone screen recordings. Keep one young male creator in a backward black cap and dark hoodie speaking directly to camera in a studio setup with a microphone. Intercut iPhone-style screen captures showing ChatGPT/OpenAI image workflow steps, uploaded object photos, prompt entry, and AI video generation screens. Maintain a practical “make from your phone” educational reel structure. No random B-roll, no unrelated tools, no logo overlays beyond app UI already present in the source.

Create a 37.8-second social-first AI tutorial reel showing how to turn ordinary phone photos into animated AI character videos. Begin with a hook using a simple hand-held object photo and bold on-screen teaching posture from the creator. Then show phone interfaces: photo selection, ChatGPT or image-tool screens, prompt entry, image transformation results, switching to an AI video tool, uploading the generated image, entering a motion prompt, and generating the final animated output. Use repeated face-cam segments where the creator explains the steps and emphasizes that the workflow can be done from a phone.

Include the specific examples visible in the source: tiny object/food photos held in a hand, ChatGPT app icon and mobile interface, typed prompts that turn objects into cute expressive characters, a generated pear-like baby character image, a switch to another AI generation interface, upload and prompt steps for video, and a final generated moving result shown on-screen. Preserve the educational pacing and creator-marketing vibe.

SHOT SEGMENTS:
[00:00-00:06] Hook with object photos in hand and creator talking-head intro about making AI content from your phone.
[00:06-00:14] Mobile screens show ChatGPT / image workflow setup, app screens, and prompt entry.
[00:14-00:22] Creator explains the key steps while on-screen phone UI shows prompt refinement and generated object-to-character image outputs.
[00:22-00:30] The tutorial switches to an AI video tool, showing upload, prompt, and generation steps from the phone.
[00:30-00:37.8] Final result displays the generated animated character clip, while the creator closes with a call to try the workflow.

ENVIRONMENT: creator desk/studio face-cam plus crisp mobile screen recordings. CAMERA: direct-to-camera presenter shots alternating with full-screen phone UI captures. LIGHTING: clean creator-studio lighting on face-cam; bright legible phone UI on inserts. MOTION: tutorial pacing, finger taps on phone UI, creator emphasis gestures, no cinematic narrative scenes.

NEGATIVE PROMPT: generic AI ad montage, unrelated tools, desktop-only workflow, no phone UI, missing creator face-cam, subtitles replacing the actual visible UI, blurry screens, watermark, logo overlays.

SPEECH PACK: creator-to-camera tutorial speech implied, but do not transcribe captions here.
Video

GLOBAL LOCK: A short-form tutorial reel hosted by a young light-skinned male creator in his early 20s with a slim build, short dark hair mostly hidden under a backwards black baseball cap, dark eyebrows, clean-shaven face, and a direct confident delivery style. He wears a black hoodie in a dark studio with magenta and blue edge lighting on his face and shoulders. Across the whole video, keep the creator visually consistent whenever he appears on camera. Alternate between direct-to-camera talking-head shots and desktop/screen-recording style inserts that show app interfaces, prompt builders, editing panels, and generated example outputs. The overall structure must feel like a practical creator education reel teaching how to make viral AI videos with ChatGPT, GPTs, Kling, and an editor workflow. Use social-video pacing, clear cut points, large readable interface elements, bold keyword captions, and crisp screen captures. Speech style is one energetic male speaker only, close-mic, dry room, high intelligibility, punchy cadence, creator-educator tone, with cuts landing on emphasized words.

[00:00–00:03] A hyper-stylized example montage opens the reel before the tutorial explanation begins. Show quick AI-generated insert shots: a yellow/orange plush-like character or pastry-like creature in a tiny kitchen set, exaggerated close framing, warm domestic lighting, toy-scale props, and a glossy social-media-ready finish. Add motion that feels like a viral AI clip rather than a static still: tiny hand gestures, object movement, short action beats, and a polished ad-like grade. Include large social-post overlays such as view counts or bold engagement graphics to imply virality. No host is visible yet. No spoken words should be clearly readable on lips here; if needed, let the first line begin under the montage as voice-over. Audio should already feel like a tutorial hook.

[00:03–00:07] Cut hard to a centered talking-head medium close-up of the creator in the dark studio. The host looks straight into the lens and says the equivalent of “How to make viral videos,” with lips fully visible and sync strictness high. Frame him chest-up, camera at eye level, 35mm-to-50mm lens feel, shallow background, magenta-blue neon edge lights behind him. His expression is serious and helpful, with fast, clear articulation. The cut should feel like a strong tutorial promise after the flashy hook.

[00:07–00:12] Intercut between the host and the first example set. Show a vertical phone-style AI video example of a red cartoonish squishy character in a fleshy or surreal macro environment, then cut to generated household-object characters in a kitchen or interior setting, each with visible view-count overlays. Keep the host narration continuing over these inserts, explaining that viewers are asking how these kinds of videos are made. The examples should feel deliberately absurd, highly clickable, and visually varied. Maintain a social-app UI vibe on the inserts.

[00:12–00:16] Return to the host in the same neon studio framing. He explains that the process is easy or straightforward. Use a steady locked-off shot, close mic, no visible background clutter, and keep the delivery conversational but authoritative. Cut precisely on his emphasized keywords.

[00:16–00:21] Switch to screen-recording style visuals that show a desktop or browser workflow. Display recognizable AI tooling logos and interface tiles associated with ChatGPT, GPTs, custom tools, or image/video generation platforms. Cursor movement should be deliberate and readable. Then cut back briefly to the host as he explains the first step: going to GPTs or opening a custom GPT workflow. The speech remains one speaker, with no ambient distractions.

[00:21–00:27] Show actual interface navigation in a clean dark-themed desktop UI: menus, lists of GPTs, and prompt or tool panels. Include cursor clicks on fields and dropdowns. Briefly show a text or voice-input area and then a more advanced editing or story-generation screen. The host explains the setup step by step, describing where to go and what to choose. Keep the visuals aligned to the speech so every mention lands on the corresponding interface action.

[00:27–00:33] Continue in the software workflow with a tighter focus on prompt construction and asset preparation. Show text fields being filled, aspect-ratio settings such as 9:16, character/object references, and a “create story” or similar composition interface. Then reveal generated outputs: a stern milk-carton-like object character, a toast or bread-like character, and a colorful gadget character in a neon environment. The host explains that he is generating characters or story assets that can later be animated.

[00:33–00:39] Stay in screen-recording mode and move into the video-generation stage. Show the generated stills or character renders inside a platform interface, then a workflow where files are exported, selected, or prepared for upload into Kling AI or a comparable video generator. Interface panels should show thumbnails, upload areas, and generation controls. The host explicitly mentions Kling AI and a version number or model family, with cut-sync on the product name for emphasis.

[00:39–00:45] Demonstrate the final generation pipeline. Show the cursor uploading still images, selecting outputs, and previewing the finished short clips. Then display finished AI video shots of the angry milk-carton character and the colorful electronic character moving on their own in polished short scenes. The creator’s voice makes the pitch clear: upload the assets, run the generation, and turn them into videos like these. Keep the examples vivid and cute rather than realistic.

[00:45–00:48] End on the host back in the neon studio, now holding up a phone or printed visual reference while delivering the call to action. He tells viewers to comment for the prompt or follow for more. The shot is front-facing, centered, and slightly more animated than earlier, with confident hand motion and a creator CTA tone. Keep lips fully visible, close-mic audio dry and crisp, and land the final words right before the cut ends.

NEGATIVE PROMPT: inconsistent host identity, changing facial structure, different hats or wardrobe across talking-head shots, muddy UI text, unreadable screen captures, fake software logos replacing interface clarity, random extra speakers, robotic voice cadence, monotone narration, slurred words, lip-sync mismatch, soft unfocused screen recordings, flickering cursor, temporal jitter, duplicate objects in generated examples, malformed household characters, broken anatomy on host hands, blown-out neon highlights, crushed shadows hiding the face, excessive motion blur, abrupt camera zooms not present in the reference, noisy room echo, harsh sibilance, clipping, over-compressed dialogue, floating captions unrelated to speech, unrelated cutaway footage, low-resolution app panels, and generic “AI tutorial” visuals that ignore the specific ChatGPT-to-Kling workflow.

SHOT PROMPTS:
SHOT_01_HOOK: Viral AI example montage, tiny surreal kitchen set, pastry-like mascot, glossy toy-scale realism, warm light, social overlay metrics, ultra-clickable short-form hook.
SHOT_02_HOST_INTRO: Young male creator in backward black cap and black hoodie, neon magenta-blue studio, medium close-up, direct eye contact, says how to make viral videos, crisp close-mic tutorial delivery.
SHOT_03_EXAMPLES: Vertical examples of bizarre AI characters with high view overlays, red squishy mascot, household-object characters, meme-ready absurdity.
SHOT_04_GPTS_SETUP: Desktop UI with ChatGPT and GPT listings, cursor selecting custom GPT workflow, host explaining first setup step.
SHOT_05_PROMPT_BUILD: Dark-mode interface, text prompts, asset setup, aspect-ratio controls, create-story panel, generated character images appearing.
SHOT_06_KLING_STAGE: Exported character stills uploaded into Kling AI style interface, generation controls, preview windows, finished animated clips.
SHOT_07_CTA: Host returns to studio, holds visual reference, asks viewers to comment and follow, assertive creator-education ending.

SPEECH PACK
[00:00–00:03]
Closest audible transcript: "People keep asking how I make these viral AI videos."
Safe paraphrase: "A lot of people keep asking how these viral AI videos are made."
TAKE_A: [confident hook] People keep asking... how I make these viral AI videos.
TAKE_B: [fast, punchy] People keep asking how I make these viral AI videos.
TAKE_C: [teacherly emphasis] A lot of people keep asking how these viral AI videos are made.
Speaker: A
Lips visible: none or partial under montage
Lip-sync strictness: low
Mic-room signature: close mic, dry, clean, present

[00:03–00:07]
Closest audible transcript: "How to make viral videos."
Safe paraphrase: "Here is how to make viral AI videos."
TAKE_A: [direct] How to make viral videos.
TAKE_B: [slightly slower] Here's how to make viral AI videos.
TAKE_C: [emphasis on viral] How to make VIRAL videos.
Speaker: A
Lips visible: full
Lip-sync strictness: high
Cut sync: strong cut lands on "How"

[00:07–00:12]
Closest audible transcript: "A lot of you were asking me how these videos are made."
Safe paraphrase: "A lot of you asked how these kinds of videos get made."
TAKE_A: [friendly] A lot of you were asking me how these videos are made.
TAKE_B: [faster] A lot of you asked how these kinds of videos get made.
TAKE_C: [storytelling] So a lot of you have been asking... how these videos are actually made.
Speaker: A
Lips visible: mixed
Lip-sync strictness: medium

[00:12–00:16]
Closest audible transcript: "It's actually really easy."
Safe paraphrase: "It's way easier than people think."
TAKE_A: [reassuring] It's actually really easy.
TAKE_B: [casual] It's way easier than people think.
TAKE_C: [emphasis] This is actually super easy.
Speaker: A
Lips visible: full
Lip-sync strictness: high

[00:16–00:21]
Closest audible transcript: "Go to GPTs..."
Safe paraphrase: "First, open GPTs and start there."
TAKE_A: [instructional] Go to GPTs.
TAKE_B: [calm tutorial] First, open GPTs and start there.
TAKE_C: [step-by-step] Step one: go into GPTs.
Speaker: A
Lips visible: mixed
Lip-sync strictness: medium

[00:21–00:27]
Closest audible transcript: "Use any example..."
Safe paraphrase: "Use any example or template that fits the kind of video you want."
TAKE_A: [guide tone] Use any example that fits what you want to make.
TAKE_B: [clear] Use a template or example that matches the type of video you want.
TAKE_C: [slightly faster] Pick any example that lines up with the kind of video you're trying to make.
Speaker: A
Lips visible: partial
Lip-sync strictness: medium

[00:27–00:33]
Closest audible transcript: "Create... paste the... into..."
Safe paraphrase: "Create the assets, paste the prompt in, and set the format you want."
TAKE_A: [procedural] Create the assets, paste the prompt in, and set the format you want.
TAKE_B: [step-by-step] Build the assets, paste everything in, then choose your format.
TAKE_C: [faster tutorial cadence] Create it, paste the prompt, and set it up the way you need.
Speaker: A
Lips visible: mixed
Lip-sync strictness: medium

[00:33–00:39]
Closest audible transcript: "Go like Kling AI 2.6..."
Safe paraphrase: "Then take it into Kling AI and generate the motion from there."
TAKE_A: [brand emphasis] Then take it into Kling AI and generate the motion from there.
TAKE_B: [short] Next, use Kling AI for the video part.
TAKE_C: [tutorial tone] After that, bring the assets into Kling AI and run the generation.
Speaker: A
Lips visible: mixed
Lip-sync strictness: medium
Cut sync: emphasize "Kling AI"

[00:39–00:45]
Closest audible transcript: "Upload... and make videos like this."
Safe paraphrase: "Upload your images and turn them into videos like these."
TAKE_A: [instructional] Upload your images and turn them into videos like these.
TAKE_B: [punchy] Upload them... and make videos like this.
TAKE_C: [encouraging] Just upload the assets and you'll get videos like these.
Speaker: A
Lips visible: mixed
Lip-sync strictness: medium

[00:45–00:48]
Closest audible transcript: "Comment... follow..."
Safe paraphrase: "Comment if you want the prompt, and follow for more."
TAKE_A: [creator CTA] Comment if you want the prompt, and follow for more.
TAKE_B: [fast CTA] Comment for the prompt and follow for more.
TAKE_C: [friendly close] Drop a comment if you want it, and follow for more videos.
Speaker: A
Lips visible: full
Lip-sync strictness: high
Video
GLOBAL LOCK: A vertical 9:16 creator tutorial reel teaching how to make first-person time-travel vlogs with AI. The lower half of the video holds a young male creator speaking directly to camera in a dark studio with red side lighting, black hoodie or jacket, and a backward cap. The upper half alternates between social-proof examples, smartphone search screens, browser pages, prompt-writing documents, and final generated historical selfie videos. The core output style is a realistic vlog shot where a modern creator appears to be filming himself inside major historical moments such as Viking England, the Wild West, or D-Day. The entire reel should feel practical and system-driven, built for viewers who want repeatable viral history content.

[00:00-00:12] Open on two successful example clips above the speaker: one where a young woman appears to selfie-vlog among Vikings in England in 865 AD, and another where she appears in a Wild West town in 1880. Both examples should look like genuine first-person historical vlogs with modern camera behavior but era-correct surroundings. View counts or social-proof markers should be visible to show that this content format already works.

[00:12-00:28] Move into the workflow entry step through a smartphone UI. Show a phone search screen with “Time Travel” typed in, then a Google-like result page for “Higgsfield AI.” The creator below explains the process in clear terms, making the tutorial feel accessible. The emphasis is on how surprisingly simple the setup is once the right tools are known.

[00:28-00:46] Show prompt-building and script-generation stages. Display a prompt document or text page labeled for text-to-video prompts, with entries for historical scenarios like landing craft before a beach assault or other era-specific vlog scripts. The interface should feel like a practical creator workflow rather than a polished marketing demo. The point is that the output begins with scripting the right first-person historical situation.

[00:46-01:01] End on a dramatic finished example where the creator appears to be selfie-vlogging during a World War II beach landing, with smoke, soldiers, landing craft, and battlefield chaos behind him. Overlay a small thumbnail or packaging element suggesting how the final video can be turned into a clickable social or YouTube asset. The result should feel both absurd and convincing: modern vlog behavior dropped into a massive historical event.

NEGATIVE PROMPT: static history painting look, third-person documentary framing, no selfie perspective, bland phone UI, generic prompts, inconsistent main character face, casual modern backgrounds, low-detail crowds, weak historical setting, no social-proof packaging.

SHOT PROMPTS: Viking time-travel selfie vlog; Wild West selfie vlog; phone search Time Travel; Higgsfield AI search result; ChatGPT prompt document; text-to-video historical script; D-Day beach selfie vlog; viral history series tutorial.

SPEECH PACK: One male speaker only. Tone is practical and energetic, emphasizing simplicity, virality, and repeatability. Stress “time travel vlogs,” “Higgsfield AI,” “ChatGPT prompts,” and the historical selfie angle.
Video

MASTER PROMPT
GLOBAL LOCK: Vertical tutorial reel about making explainer videos with generative AI. Use a host in a cap, cartoon examples, script pages, tool dashboards, character sheets, and output previews. Keep the pace educational and process-driven.

[00:00-00:05] Open on text and visual examples that frame the topic as how to make explainer videos.
[00:05-00:12] Show the host, profile-style context, and interface screens.
[00:12-00:20] Move through scripts, planning docs, and design boards.
[00:20-00:28] Show character sheets, tool dashboards, and generated assets.
[00:28-00:36] End on output recap and workflow close.

NEGATIVE PROMPT
Avoid unreadable UI, inconsistent characters, weak text legibility, random workflow jumps, and robotic host delivery.

SPEECH PACK
Open by framing the topic as how to make explainer videos. Walk through the workflow from scripts and references to characters, tools, and output. Close by reinforcing that the process is repeatable for creators and brands.
Video
GLOBAL LOCK: A vertical 9:16 tutorial Reel, approximately 55 seconds, teaching creators how to make first-person “time travel vlog” videos with AI. The format alternates between three visual layers: (1) viral sample clips styled as selfie vlogs recorded inside different historical eras, with a modern creator holding the camera at arm’s length and speaking to viewers while standing inside convincing period environments such as ancient Pompeii, plague-era London, or ancient Egypt; (2) a talking-head male host in a black cap and dark jacket/hoodie, centered against a dark background with red-magenta accent lighting and studio microphone visible; and (3) screen recordings of research prompts, AI tool menus, model selection screens, and generation dashboards showing how the workflow is assembled. The historical vlog examples should feel UGC and immediate: casual arm-extended framing, reactive facial expression, period background detail, and humorous or surprising captions like “I tried to warn the people of Pompeii” or “I visited London during the Black Death.” The tutorial tone is direct, tactical, and creator-friendly.

[00:00-00:05] Open with two or more viral time-travel vlog examples stacked above the host. Show a woman filming herself in ancient Pompeii and another person filming themselves in plague-era London, each with caption-style text embedded in the sample. The host below introduces the concept with bold text like TIMETRAVEL VLOG and immediately frames it as a repeatable AI content format.

[00:05-00:12] Continue with more sample cards, including an ancient Egypt selfie-vlog shot near the pyramids with humorous “POV: I time-traveled…” captioning. Keep the host visible below, speaking quickly while the audience sees the end result before the process.

[00:12-00:18] Transition to research/prompt structure. Show a white text document or GPT-style planning screen listing inputs such as historical event, era, location, future scenario, or fictional world. The text promises outputs like cinematic text-to-image prompts, text-to-video prompts, spoken vlog dialogue, and background action. The host explains that you first decide the time/place/event.
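
The planning-screen inputs listed above (historical event, era, location) map naturally onto a small prompt-builder. This is a hedged sketch: the field names, the `TimeTravelBrief` class, and the output template are illustrative assumptions, not the exact prompts shown in the reel.

```python
from dataclasses import dataclass

# Hypothetical brief structure; field names are assumptions based on the
# inputs the planning screen lists (event, era, location, caption).
@dataclass
class TimeTravelBrief:
    event: str      # e.g. "the eruption of Vesuvius"
    era: str        # e.g. "79 AD"
    location: str   # e.g. "Pompeii"
    caption: str    # the humorous on-screen caption

    def to_video_prompt(self) -> str:
        # First-person selfie framing is the format's core constraint.
        return (
            f"First-person selfie vlog, arm-extended framing, modern creator "
            f"speaking to camera in {self.location}, {self.era}, during "
            f"{self.event}; era-correct background detail, UGC energy. "
            f"Caption: {self.caption}"
        )

brief = TimeTravelBrief(
    event="the eruption of Vesuvius",
    era="79 AD",
    location="Pompeii",
    caption="I tried to warn the people of Pompeii",
)
prompt = brief.to_video_prompt()
```

Changing only the era/event/location fields and regenerating is what makes the format serial, which is the tutorial's repeatability argument.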

[00:18-00:24] Show additional workflow pages or prompt-planning screens that suggest using a custom GPT or research agent to generate the historical setup, dialogue, and shot instructions. The host remains steady, centered, and instructional, while the UI reinforces that the process is systematic.

[00:24-00:31] Move into the image-generation stage. Show a dark creative workspace with model selection (for example Seedream 5.0 Lite or other image tools), “Create Image” style tabs, visual reference upload zones, and prompt boxes. The host explains that you create still images first before moving to video.

[00:31-00:38] Cut through tool chain screens implying additional steps: OpenArt or similar image creation, ElevenLabs for voice-over, and CapCut or editing steps. The important point is that the final time-travel vlog is modular: script, image, animation, voice, edit. Keep the visuals practical rather than abstract.

[00:38-00:46] Show video-generation dashboards where historical selfie frames are converted into short clips. Example thumbnails display the vlog protagonist inside sandy excavation scenes or period streets, with output duration settings visible. The host explains that you use short durations and build several clips for one vlog.

[00:46-00:55] End by returning to the strongest time-travel examples while the host summarizes the workflow. The final feeling should be that anyone can choose an era, generate a selfie-style historical perspective, add voiceover, and turn it into a serialized creator format.

NEGATIVE PROMPT: avoid polished cinematic third-person shots, avoid generic documentary history footage, avoid inconsistent protagonist identity between clips, avoid modern background objects leaking into historical scenes, avoid unreadable UI panels, avoid lifeless history tableaux, avoid missing selfie-arm framing, avoid flat educational tone, avoid overlong text blocks on screen, and avoid making the workflow feel more complex than the creator can reproduce.
Video
by.shlabu

GLOBAL LOCK: Horizontal creator-demo video set in a minimalist white studio built around a glossy retro-futurist red terminal or kiosk branded as an AI creation device. The cast includes a young blond man with curly hair and casual-cool styling, plus a brunette woman in a black camisole or simple fitted top. The red terminal has a built-in screen that first shows a crude stick-figure face, then transitions into a modern AI interface associated with Hedra Agent. The style blends real-life creator demo energy with clean commercial staging: white cyclorama backdrop, bold red hardware centerpiece, yellow subtitle captions, and fast transitions into generated outputs. The core promise is that casual natural-language requests can be turned into structured prompts, AI tool recommendations, and finished visuals.

[00:00-00:08] Open on a cinematic shot of the blond man sitting in or beside a vintage car with bold yellow subtitle text. The mood feels like a lifestyle ad or stylized short film. The brunette woman appears in adjacent car shots, creating the impression of a polished generated scene.

[00:08-00:14] A pink title card or interstitial appears, then the video cuts into the white studio setup with the retro red terminal. The brunette woman stands beside it while the blond man faces the screen. Yellow subtitle captions carry the spoken explanation.

[00:14-00:22] The terminal screen shows a simple stick figure, then switches to a Hedra-like interface asking what should be made today. This establishes the joke and the product capability at the same time: conversational input becomes creative output.

[00:22-00:32] Show the interface more clearly. A prompt field, asset options, and example thumbnails appear as the system loads. The presenter explains that the agent can understand casual requests, structure prompts, and route them toward the right generation tools and settings.

[00:32-00:42] Cut to the visual payoff: multiple styled versions of the same man appear side by side in different looks and outfits, demonstrating reference control and character transformation. The clean white background keeps attention on the generated variations and the tool logic above them.

[00:42-00:54] End with more polished studio shots of the brunette woman beside the red terminal while the narration frames Hedra Agent as an easier way to generate strong AI visuals. The overall tone should feel like a product demo wrapped in a playful, high-concept studio vignette.
Video
GLOBAL LOCK:
- Format: vertical 9:16 short-form tutorial reel, creator-education pacing, black background UI inserts, high contrast social video polish.
- Keep one consistent male creator for all talking-head shots: young adult male, light skin, black backwards baseball cap, black hoodie/jacket, seated at desk, direct-to-camera framing, confident tutorial delivery.
- Keep one consistent demo subject inside the generated example image/video: a plush panda lying on a worn circular rug in a dim rustic room with warm overhead spotlight, scattered objects around the floor, soft moody shadows.
- No character drift, no costume drift, no sudden age changes, no extra presenters, no unrelated cutaways.

SHOT TIMELINE:

[00:00-00:03]
Talking-head intro. Creator sits centered against dark background and speaks straight to camera with energetic tutorial tone. Large editorial text overlays summarize the hook: make cinematic scenes from your phone. Insert fast teaser flashes of social posts showing the panda image/video result and yellow headline blocks.

[00:03-00:06]
Phone close-up UI. Vertical smartphone screen fills frame. A circularly framed panda image appears inside a social-style composition. Overlaid kinetic words emphasize the concept of turning a phone photo into a scene. Screen recording aesthetic should remain crisp and legible.

[00:06-00:09]
Back to talking head. Creator gestures lightly while saying the workflow starts by opening the app. Tight chest-up framing, direct eye contact, subtle head movement, clean synced speech.

[00:09-00:12]
Phone settings interface. User taps through app menu and settings-like pages to reach AI generation tools. Interface is dark mode, minimal, modern, with distinct list items and icons.

[00:12-00:16]
Prompt-building section on phone. Search field, model selection, and text-entry screens appear. User searches for GPT/prompt helper style tools, selects options, and opens a text area. On-screen rhythm should clearly communicate “build the prompt first.”

[00:16-00:20]
Text drafting flow on phone. Long paragraph prompt appears in a dark text box. User chooses/copies prompt text, then taps through action buttons. Highlight the exact motions: choose, copy, click, and go. The UI should feel like a real mobile workflow, not abstract fake panels.

[00:20-00:24]
Model/generation interface. User pastes the prompt into an AI image/video generation tool, selects the correct model or preset, and taps generate. Show dark-mode tool UI with image prompt area, buttons, and tabs.

[00:24-00:28]
Example asset preview returns. The panda scene appears again as a generated image/video preview. The phone screen cycles from prompt entry to generated result. Add supporting overlay words that reinforce the logic of generating the scene from a single photo.

[00:28-00:32]
Phone-to-output transition. The generated panda shot becomes larger and more immersive, as if stepping out of the interface into the final cinematic frame. Keep the panda, rug, spotlight, and room layout consistent with the reference image.

[00:32-00:35]
Talking-head recap. Creator returns on camera and explains the final step or CTA. He maintains same wardrobe and setup, speaking with persuasive, practical creator-teacher energy.

[00:35-00:39]
Final CTA and social proof. Talking-head remains center frame while comment-style overlays and platform UI elements appear below, suggesting engagement and repeatability. End on a clean, punchy tutorial finish.

VISUAL STYLE:
- Social tutorial reel, fast but readable editing.
- Mix talking-head shots with direct phone-screen recordings.
- Dark UI, white text, occasional high-contrast yellow hook text.
- Clean mobile creator aesthetic with authentic app interaction.

CAMERA AND EDITING:
- Talking-head: locked tripod or subtle digital push-in.
- Phone segments: full-screen mobile capture with smooth taps and transitions.
- Fast snap cuts between explanation, interface, and result.
- Keep chronological clarity so the viewer can follow the workflow in order.

SPEECH PACK:
- Spoken language: English.
- Creator voice: young male creator educator, confident, concise, practical, slightly hyped but not cheesy.
- Delivery style: short tutorial phrases, clear CTA emphasis, social-video pacing.
- Lip sync must stay natural and tightly aligned during talking-head shots.

NEGATIVE PROMPT:
- No extra hands floating over the phone.
- No unreadable UI gibberish replacing app text.
- No switching creator identity between talking-head shots.
- No panda changing species, color, pose logic, or room layout between preview and final output.
- No random additional animals or fantasy objects appearing in the room.
- No horizontal framing, no cinematic letterboxing, no documentary cutaways.
- No blurred phone screens, broken typography, or unusable interface text.
Video
GLOBAL LOCK: vertical 9:16 AI creator tutorial reel, one consistent young adult male host with light skin, slim build, black backwards cap, black hoodie, seated at a desk with a black microphone and red accent light, dark studio backdrop with magenta-blue edge lighting, intercut with mobile screenshots, GPT interfaces, and AI video-generation UIs, plus a consistent translucent skeleton character wearing a red-and-white Santa hat in domestic interior scenes, crisp subtitle words, fast educational pacing, dry close-mic narration, same male voice throughout.

[00:00-00:04] Open with a fast collage hook showing viral skeleton-video examples and bold poster text about making viral videos. Include short clips of a translucent skeleton character in indoor lifestyle scenes, one wide-eyed close-up, and a host talking-head frame, establishing that this is a practical creator tutorial with proof of outcome.

[00:04-00:08] Show more examples of skeleton-content performance and creator-economy proof, including mobile screenshots of account pages, reward or payout dashboards, and social posts. The host explains that the niche is working and transitions into how to build it, speaking quickly and directly.

[00:08-00:14] Cut to the host full-screen in the studio, then to ChatGPT or GPT search interfaces on mobile or desktop. The tutorial explains that you should start with GPTs or prompt helpers and build the workflow step by step rather than improvising. Subtitle words emphasize functional phrases like ChatGPT, GPTs, and prompts.

[00:14-00:20] Show the prompt-building phase in detail. White UI screens display prompt fields and typed instructions, implying a structured set of six prompts or a multi-part setup. The host stays calm and procedural, telling the viewer exactly what to type and how to format it.

[00:20-00:28] Transition into AI creation tools. Show a dark interface with image or video generation modules, side panels, and tool tabs. The user selects image creation first, then navigates toward Kling or a similar generator. The host explains the order of operations: create stills first, then move into motion.

[00:28-00:36] Demonstrate the reference-image workflow. The interface shows image guidance or reference upload controls, vertical aspect-ratio options like 9:16, and resolution presets like 2K. The host narrates the exact setup sequence and stresses consistency across outputs.

[00:36-00:44] Show the generated skeleton images in a clean preview grid or card layout: a translucent skeleton wearing a Santa hat in a home interior, including a supplement-container scene. The visuals are slightly absurd but commercially legible. The host explains that these outputs can now be animated into finished clips.

[00:44-00:51] Finish by showing the final skeleton outputs again, including the supplement-jar image and another upright kitchen or room scene, while the host closes with a practical CTA. The last moments should feel like a teachable end-to-end workflow summary for making viral skeleton videos rather than just a meme montage.

NEGATIVE PROMPT: deformed skeleton anatomy, unreadable UI text, broken hands, distorted supplement jar label, inconsistent Santa hat, flickering generated images, low-resolution previews, fake app layouts, duplicate host, robotic narration, harsh sibilance, clipping, lip-sync mismatch, random watermarks, noisy backgrounds.

SPEECH PACK:
- Hook: Here’s how people are making viral skeleton videos and how you can do it too.
- Beat 1: Start with GPTs and use a structured prompt workflow instead of guessing.
- Beat 2: Write the full prompt set first, then generate your still images before moving into video.
- Beat 3: Add a reference image, set the frame to 9:16, pick a clean resolution, and run the sequence in Kling.
- CTA: If you want the workflow, save this and copy the setup.
Video
GLOBAL LOCK: vertical 9:16 static poster-style social promo, bold high-contrast creator-marketing layout, black background with bright yellow headline bars, two example phone-screen mockups centered in the composition, one showing a translucent human skeleton figure standing indoors and one showing the same skeleton in a domestic scene holding cookware, glossy thumbnail polish, crisp readable typography, tutorial-ad aesthetic, no camera shake, no extra elements, no watermark.

[00:00-00:02] Open on the full poster layout with a large all-caps headline reading how to make viral skeleton shorts. Two phone-style panels dominate the center: the left panel shows a translucent skeleton-like figure in a softly lit home interior, and the right panel shows a skeleton character in a more playful domestic pose, creating an immediate “viral AI content formula” feel.

[00:02-00:03] Hold the layout with a slight digital push-in so the example panels become more legible. Preserve the bright yellow headline bar, the black poster background, and the swipe-for-the-full-guide messaging at the bottom. The overall frame should still read like a reel cover or short-form promo graphic.

[00:03-00:05] Finish on the same static promo composition, optimized for mobile viewing and creator education. Keep the two skeleton examples clear, the tutorial promise dominant, and the bottom CTA visible so the final frame looks like a conversion-focused guide advertisement for AI short-form content creators.

NEGATIVE PROMPT: unreadable text, broken skeleton anatomy, extra limbs, warped phone frames, low-resolution poster, muddy contrast, duplicate panels, generic stock layout, flicker, watermark, distorted cookware, text artifacts, messy background clutter, weak CTA.

SPEECH PACK:
- Hook: Here’s how to make viral skeleton shorts like these.
- Beat 1: The format works because the character is instantly recognizable and the scenes are simple.
- Beat 2: Use a strong repeatable prompt structure and clear domestic actions.
- CTA: Swipe for the full guide.
Video

A) MISE EN PLACE

Reference summary
- Duration: 00:57.79
- Format: vertical 9:16, 720x1280, 24 fps
- Structure: talking-head tutorial reel demonstrating HeyGen AI Agent for UGC-style content creation
- Audio: direct-to-camera creator narration; exact words inferred best-effort from caption, visible UI, and pacing
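The reference summary's numbers can be cross-checked with a quick sketch (illustrative only, not part of the production workflow): the pixel dimensions should reduce to the stated 9:16 aspect, and the duration implies a total frame count at 24 fps.

```python
import math

# Values copied from the reference summary above
width, height = 720, 1280
duration_s, fps = 57.79, 24

g = math.gcd(width, height)          # greatest common divisor of the dimensions
aspect = (width // g, height // g)   # reduced aspect ratio
frames = round(duration_s * fps)     # approximate total frame count

print(aspect)   # (9, 16)
print(frames)   # 1387
```

So 720x1280 does reduce to 9:16, and the 00:57.79 runtime corresponds to roughly 1,387 frames at 24 fps.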

Scene / shot segmentation
1. 00:00.00-00:10.00
   Hook section with phone-shot UGC example footage on screen, presenter lower center. A female creator-style vertical clip is shown as the practical target output while the host frames the feature as a new way to make UGC content.
2. 00:10.00-00:22.00
   More UGC examples and social-style before/after proof, including a hand pointing at the screen to emphasize generated results and mobile-native output.
3. 00:22.00-00:38.00
   HeyGen product interface section. Dark dashboard and setup screens take over, showing AI Agent-related controls, workflow panels, and configuration blocks while presenter keeps explaining.
4. 00:38.00-00:49.00
   Deeper editor / media management section. Grid-based asset views and back-office screens appear, suggesting avatar, scene, or media orchestration.
5. 00:49.00-00:57.79
   Presenter-forward close with strong CTA energy, likely asking viewers to comment “AI” for the link.

Visual evidence keyframes
- 00:00.00: UGC-style female selfie/creator shot framed on a phone screen, presenter lower center
- 00:08.00: finger pointing at screen, emphasizing mobile-native proof
- 00:16.00: second UGC-style clip with presenter continuing explanation
- 00:24.00: dark HeyGen interface with AI Agent-style workflow card and controls
- 00:32.00: dashboard-like panels and configuration widgets
- 00:40.00: media grid / project management view
- 00:52.00: presenter larger in frame with CTA close energy

Speech evidence (best-effort)
- speaker_count: 1
- speaker A: male-presenting creator speaking on-camera throughout
- speech style: upbeat tutorial narration, positioning the new HeyGen AI Agent feature as a way to produce UGC-style ad/social content
- likely content themes in order:
  1) how to create UGC-style content using HeyGen’s new AI Agent feature
  2) quick proof that the format works for social-style output
  3) walkthrough of the HeyGen setup / dashboard / workflow
  4) explanation of how the tool helps generate content faster
  5) comment “AI” for the link
- lip visibility: full for most presenter segments
- lip_sync_strictness: medium

Invariants list (LOCK THESE)
- presenter identity: male creator in casual cap, beard, light t-shirt, speaking directly to camera from a seated setup
- layout: presenter near bottom center while examples and interface screens rotate above and behind him
- product context: HeyGen AI Agent, UGC-style content creation, social media / ad creative workflow
- design language: creator tutorial, mobile-first, dark dashboard UI, concrete examples before tool explanation
- motion grammar: hard cuts between example clips and dashboard screens, no elaborate cinematic camera move
- lighting / grade: presenter evenly lit, warm-neutral skin tones, dark interface background, bright phone-screen examples
- audio style: concise, creator-education voice optimized for shorts/reels

Variables list (TWEAK THESE)
- exact UGC example faces and scenes
- exact dashboard panels and wording on HeyGen screens
- precise narration phrasing
- exact CTA wording beyond the comment-for-link mechanic

B) SHOTLIST

Shot 1
- shot_id: 1
- timecode_start: 00:00.00
- timecode_end: 00:10.00
- duration: 10.00s
- framing: presenter lower center beneath a large mobile-video example
- lens: presenter webcam/phone-style medium crop
- camera movement: static presenter crop, brisk background swaps
- subject: presenter introduces the HeyGen AI Agent use case for UGC content
- environment: female selfie-style UGC clip filling the upper frame, social-media-native layout
- speech/audio: Speaker A hook line about creating UGC-style content using the new feature

Shot 2
- shot_id: 2
- timecode_start: 00:10.00
- timecode_end: 00:22.00
- duration: 12.00s
- framing: more UGC proof clips and touch/point emphasis on screen
- camera movement: quick cuts and proof refreshes
- subject: presenter reinforces that the output looks like social-native creator content
- environment: phone-screen examples, finger pointing, comparative proof frames
- speech/audio: Speaker A highlights the outcome and use case

Shot 3
- shot_id: 3
- timecode_start: 00:22.00
- timecode_end: 00:38.00
- duration: 16.00s
- framing: HeyGen dashboard fills most of the frame, presenter remains lower center
- camera movement: rapid UI cuts
- subject: presenter explains AI Agent setup / workflow
- environment: dark product interface, cards, toggles, and pipeline sections
- speech/audio: Speaker A turns practical and tool-specific

Shot 4
- shot_id: 4
- timecode_start: 00:38.00
- timecode_end: 00:49.00
- duration: 11.00s
- framing: deeper project/media management screens
- camera movement: hard cuts through interface states
- subject: presenter explains scaling or organizing content generation
- environment: asset grid, project thumbnails, management view
- speech/audio: Speaker A continues the workflow explanation

Shot 5
- shot_id: 5
- timecode_start: 00:49.00
- timecode_end: 00:57.79
- duration: 8.79s
- framing: presenter-forward close with remaining dashboard context behind him
- camera movement: mostly static close
- subject: presenter lands the CTA and link offer
- environment: dark interface or blurred dashboard backdrop
- speech/audio: Speaker A asks viewers to comment “AI” for the link
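The five shots above should tile the full 00:57.79 runtime with no gaps, and each stated duration should match its timecodes. A minimal Python sketch can verify this; the `parse_tc` helper and the hard-coded shot table are illustrative assumptions, not part of any tool in the workflow.

```python
def parse_tc(tc: str) -> float:
    """Parse an MM:SS.ss timecode string into seconds."""
    minutes, seconds = tc.split(":")
    return int(minutes) * 60 + float(seconds)

# (start, end, stated duration) copied from the shotlist above
shots = [
    ("00:00.00", "00:10.00", 10.00),
    ("00:10.00", "00:22.00", 12.00),
    ("00:22.00", "00:38.00", 16.00),
    ("00:38.00", "00:49.00", 11.00),
    ("00:49.00", "00:57.79", 8.79),
]

prev_end = 0.0
for start, end, stated in shots:
    s, e = parse_tc(start), parse_tc(end)
    assert abs(s - prev_end) < 1e-6, "shots must be contiguous"
    assert abs((e - s) - stated) < 1e-6, "stated duration must match timecodes"
    prev_end = e

assert abs(prev_end - 57.79) < 1e-6  # total matches the reference duration
```

The same check generalizes to any shotlist in this document: contiguous timecodes keep generated segments from overlapping or leaving dead air when the clips are assembled.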

C) STYLE BIBLE (GLOBAL)

- visual_style: AI creator tutorial reel, UGC marketing workflow breakdown
- camera_signature: persistent talking-head lower-third with changing proof and interface backgrounds
- lighting_signature: soft creator lighting on presenter; bright mobile examples contrasted with dark software UI
- grade_signature: warm-neutral presenter, darker dashboard, high-contrast phone-screen inserts
- texture_signature: crisp app interface, handheld/phone-look proof clips, creator desk setup feel
- pacing_signature: quick promise, quick proof, practical workflow, CTA
- speech_style: direct-to-camera tutorial narration
- speaker_profile: enthusiastic, practical, creator-marketer tone
- pronunciation_profile: casual English, medium-fast, emphasis on tool name and outcome
- mic_mix_profile: dry, clear creator audio with light compression

D) PROMPT SYNTHESIS

MASTER PROMPT

GLOBAL LOCK: Create a vertical 9:16 creator tutorial reel about using HeyGen’s new AI Agent feature to make UGC-style content. Keep one male creator presenter seated near the bottom center for most of the video. He has a short beard, baseball cap, casual light t-shirt, and speaks directly to camera with energetic but practical tutorial cadence. The background rotates between UGC-style phone footage, mobile-screen examples, dark HeyGen dashboard screens, AI Agent workflow panels, media-management views, and a final comment CTA. Preserve a mobile-first, scroll-stopping structure: proof first, interface next, conversion close. Lighting on the presenter stays soft and even, with a clean creator-desk feel.

[00:00-00:10.00] Open with a realistic UGC-style female selfie or creator clip filling the upper frame, as if viewed on a phone screen, while the presenter appears lower center and introduces how to create this kind of content using HeyGen’s new AI Agent feature. Keep the frame immediately legible for social media: the viewer should instantly understand that the end goal is ad-ready, creator-native short-form content. Speaker A is upbeat and explanatory, lips visible, medium lip-sync strictness.

[00:10.00-00:22.00] Continue with more proof-driven UGC examples and mobile-native frames. Include finger-pointing or screen-emphasis moments to make the tutorial feel tactile and practical rather than abstract. The presenter keeps speaking and gesturing while showing that the output can pass as social-ready creator content. Use quick cuts with clear result-first momentum.

[00:22.00-00:38.00] Transition into the HeyGen product interface. Show a dark dashboard with AI Agent workflow blocks, setup cards, toggles, and configuration panels. Keep the presenter lower center and have him explain how the feature works in practice. The background should clearly read as real software, not a mockup. Sync sentence accents to UI changes.

[00:38.00-00:49.00] Show deeper operational screens such as a media grid, project organization view, content assets, or an editor-style management panel. The presenter continues with a practical explanation about building, organizing, or scaling UGC outputs through the tool. Maintain a creator-tutorial pace with clean hard cuts and readable interface detail.

[00:49.00-00:57.79] Close with the presenter more dominant in the frame while HeyGen context remains visible behind him. End with a direct CTA asking viewers to comment “AI” for the link. Make the final frame readable, conversion-oriented, and clearly tied to the value already demonstrated.

NEGATIVE PROMPT

Avoid warped phone screens, unreadable dashboard text, messy cutout edges around the presenter, drifting face identity, fake-looking UGC footage, over-animated transitions, robotic narration, slurred speech, lip-sync mismatch, clipping, room echo, low-contrast CTA text, random wardrobe changes, muddy UI panels, flicker, frame jitter, and generic ad visuals that do not feel native to social feeds.

SHOT PROMPTS

- Hook delta: mobile-native UGC proof clip with presenter lower center
- Proof delta: more creator-style examples and finger-point emphasis
- Dashboard delta: dark HeyGen AI Agent setup interface
- Management delta: media grid / project organization view
- CTA delta: presenter-forward finish with comment-for-link ask

SPEECH PACK

Timecoded transcript (best-effort observable reconstruction)
- [00:00.00-00:10.00] Speaker A: “Here’s how to create UGC-style content using HeyGen’s new AI Agent feature.” Emotion: upbeat, hook-first.
- [00:10.00-00:22.00] Speaker A: “This lets you generate social-native creator content much faster while keeping the output usable for marketing.” Emotion: confident, proof-oriented.
- [00:22.00-00:38.00] Speaker A: “Let me show you the HeyGen workflow and how the AI Agent part fits in.” Emotion: practical, tutorial-focused.
- [00:38.00-00:49.00] Speaker A: “From here you can manage the content, examples, or project setup inside the dashboard.” Emotion: tactical, steady pace.
- [00:49.00-00:57.79] Speaker A: “Comment ‘AI’ for the link.” Emotion: punchy CTA close.

TAKE_A
- Keep the wording close to the lines above with creator-marketing energy.

TAKE_B
- Same meaning, slightly faster and more ad-operator focused.

TAKE_C
- Same meaning, calmer and more educational.

Closest audible version
- Exact speech was not transcribed verbatim, so the lines above represent closest observable tutorial intent supported by caption, UI context, and pacing.

Safe paraphrase version
- The reel explains how to use HeyGen AI Agent to create UGC-style content and ends by asking viewers to comment “AI” for the link.
Video
GLOBAL LOCK:
Subject is a Caucasian male in his early 30s, dark wavy hair, well-groomed medium-length beard, expressive brown eyes. He maintains a consistent facial structure across all shots. The visual style is a mix of high-end editorial photography and UGC tutorial footage. Lighting is cinematic with soft key lights and motivated rim lighting. Color grade is professional with deep blacks and vibrant but natural skin tones. Speech is clear, energetic, and instructional, delivered with a warm, authoritative tone.

[00:00–00:01]
Subject: MCU of the man wearing a dark suit, white dress shirt, black tie, and a white baseball cap with a green brim.
Action: Talking directly to the camera. A vertical white rectangular mask moves across his face, revealing a slightly different version of the same scene.
Camera: Static MCU, eye-level.
Lighting: Soft studio lighting, neutral background.
Speech: "This is how you can create..."

[00:01–00:04]
Subject: Rapid montage of AI-generated images. 
1. Man in a dark suit and sunglasses driving a green car at night, "AI MAG" text overlay.
2. Man in a checkered blazer and paisley tie in front of a brick wall.
3. Man in a white short-sleeve shirt with multiple pens in his pocket, standing in a white studio.
Action: Static editorial poses.
Camera: Various (MS, MCU).
Lighting: Cinematic, high contrast, nighttime car lighting, studio softbox.
Grade: Magazine editorial style.

[00:05–00:08]
Subject: A 3x4 grid of 12 different AI portraits of the same man in various outfits (boxing gloves, red car, street style, suit).
Action: Static images.
Overlay: Large bold text "UNLIMITED GENERATIONS" in orange and blue.
Camera: Flat grid layout.
Lighting: Varied per image.

[00:09–00:14]
Environment: Screen recording of the Higgsfield.ai website interface. A cursor moves to click "Image" then "Soul ID Character".
Action: UI navigation.
Speech: "On Higgsfield.ai, go to image and select Soul ID Character..."

[00:15–00:20]
Subject: Picture-in-picture of the man talking (wearing a tan cap and beige shirt) over a screen recording of the "Make Your Own Character" page.
Action: Explaining the process while gesturing.
Speech: "...where you can actually create your own custom character of yourself by uploading a bunch of photos."

[00:21–00:24]
Subject: Montage of AI images with text prompts.
1. Man in a suit drinking from a glass (trippy lens effect).
2. Man in a tan suit with a "Mickey Mouse Bag" in a city street.
3. Man in a white tank top and jeans in front of a "Tokyo Red Car".
Action: Posing.
Camera: Full body and MS.
Lighting: Bright daylight, stylized urban lighting.

[00:25–00:34]
Environment: Screen recording of the "Lipsync Studio" interface. Subject's PIP continues.
Action: Selecting "Video", then "Lipsync Studio", uploading an image of himself at the beach, and dragging an audio file named "voiceover.wav".
Speech: "Now you can go to video at the top of the page and select the Lipsync Studio where you can upload your photo and audio..."

[00:35–00:38]
Subject: CU of the man at a tropical beach. He is shirtless, wearing black swimming goggles on his head.
Action: He is lip-syncing perfectly to the audio, smiling slightly.
Environment: Bright blue ocean water with small waves in the background.
Camera: CU, static.
Lighting: Bright, direct sunlight with natural shadows.
Speech: "...and it will combine those two together with the best lip-sync models."

NEGATIVE PROMPT:
Visual: robotic movement, distorted facial features, inconsistent beard growth, blurry textures, flickering background, extra fingers, warped UI elements, low resolution, watermarks.
Speech: robotic monotone, lip-sync delay, muffled audio, background hiss, unnatural pauses, slurred consonants, popping sounds.

SPEECH PACK:
[00:00-00:08]
Transcript: "This is how you can create 25 magazine-ready images of yourself using AI and then you can even lip-sync on top of them with this brand new feature."
TAKE_A: (Energetic, fast-paced) "This is how you can create TWENTY-FIVE magazine-ready images of yourself using AI... and then you can even LIP-SYNC on top of them with this brand new feature!"

[00:09-00:20]
Transcript: "On Higgsfield.ai, go to image and select Soul ID Character where you can actually create your own custom character of yourself by uploading a bunch of photos."
TAKE_A: (Instructional, clear) "On Higgsfield dot A-I, go to image and select Soul I-D Character... where you can actually create your own custom character of yourself... by uploading a bunch of photos."

[00:25-00:38]
Transcript: "Now you can go to video at the top of the page and select the Lipsync Studio where you can upload your photo and audio and it will combine those two together with the best lip-sync models."
TAKE_A: (Helpful, concluding) "Now you can go to video at the top of the page and select the Lipsync Studio... where you can upload your photo and audio... and it will combine those two together with the best lip-sync models."
Video

A creator-led vertical tutorial explains how to build viral AI-generated animal and educational-style short-form channels. The video mixes talking-head narration, screenshots of successful social accounts, earnings proof, and detailed screen-recorded workflow breakdowns. Early on, the presenter highlights a niche built around stylized transparent or skeletal pet creatures, showing example clips, account growth, and monetization screenshots to establish the opportunity. The tutorial then moves into a full production pipeline: using ChatGPT or custom GPTs for concept development, generating animal prompts, building static images in tools like OpenArt or Nano Banana, and then bringing those images into video-generation platforms such as Kling or Veo to create animated clips. The overall presentation is structured as a repeatable AI content system for animal-themed educational or entertainment channels.
Video
by.shlabu
Create a short-form creator tutorial video about how to make cinematic AI clips from simple ideas. The piece should feel like an Instagram Reel or TikTok posted by an AI filmmaking educator, combining direct-to-camera instruction with polished cinematic sample shots and interface cutaways. Use a confident creator host in a dark studio or moody workspace, speaking naturally to camera while explaining a repeatable workflow for generating cinematic AI videos. The pacing should be fast, sharp, and social-first, with frequent visual resets to keep attention high.

Open with a strong hook where the creator talks directly to camera and promises to show viewers how to make cinematic AI clips that feel dramatic, polished, and scroll-stopping. Then cut into multiple example shots that look like finished outputs: moody action moments, dramatic close-ups, atmospheric character scenes, and premium-looking cinematic frames. Intercut those examples with prompt panels, tool UI, timeline views, or settings screens so the workflow feels grounded in real AI video creation rather than abstract inspiration.

The host should stay visually consistent across talking segments: same person, same wardrobe, same lighting setup, same direct creator-teacher tone. Their performance should feel natural and creator-native, not overly scripted. They should gesture casually, point toward on-screen examples, and deliver the lesson with energetic clarity, like someone used to teaching AI video tricks on social media.

The visual design should alternate between two clear modes. Mode one is the tutorial studio setup: dark background, controlled lighting, crisp face detail, shallow depth of field, subtle color accents, and a premium creator-desk atmosphere. Mode two is the cinematic demo footage: dramatic compositions, intentional movement, filmic contrast, moody lighting, and stronger environmental storytelling. Keep cutting between those modes so the audience always sees both the result and the process.

Keep the entire piece optimized for vertical video. For talking-head sections, use close-ups and medium close-ups with subtle push-ins or light handheld energy. For the cinematic examples, vary the framing with wides, dramatic close-ups, push-ins, tracking shots, and controlled motion that sells the idea of “cinematic” without becoming chaotic. Everything should feel curated and premium.

Lighting is important. The host footage should use flattering key light with soft falloff and a clean but moody creator-studio look. The cinematic sample shots should lean harder into contrast, rim light, atmosphere, practicals, and dramatic highlight control. The overall grade should feel modern, contrasty, and polished, with rich blacks, sharp visual separation, and subtle filmic texture.

Include insert shots of prompts, settings, or example workflow screens to reinforce the educational angle. These moments can show how ideas become prompts, how cinematic references are structured, or how the creator chooses scenes and visual style. The UI should feel real and useful, not decorative.

The edit should stay fast and social-first: hook, creator explanation, cinematic example, interface proof, another teaching beat, then more examples. Use cuts, punch-ins, overlays, and visual comparison moments so the viewer always feels momentum. The final result should feel like a practical creator tutorial that teaches viewers how to make cinematic AI clips while also showcasing enough premium output to inspire them to try the workflow themselves.

AI Brainrot Meme Maker

AI Brainrot Meme Maker is for creators who want the deliberately chaotic meme format that dominates overstimulating short-form feeds. The page should guide them toward examples and prompts that combine mismatched visuals, abrupt pacing, layered captions, TTS narration, and absurd tonal collisions.

The strongest angle is controlled chaos. Users here are not looking for polished comedy or a clean meme layout. They want the overloaded format itself: too much happening at once, fast enough to feel ridiculous, and structured enough to still land. The copy should make that promise clearly.

What this page should make clear:
- The format relies on chaos, contrast, and overstimulating pacing.
- Layered captions, abrupt cuts, and TTS are common ingredients.
- This is built for short-form internet humor, not polished editing.
- The goal is to make something instantly legible as brainrot content.

FAQ

Q: What is an AI brainrot meme maker?
A: It is a tool for making chaotic short-form meme videos with overloaded pacing and absurd combinations.

Q: What makes a meme feel like brainrot?
A: Fast cuts, stacked visuals, TTS voice, random tonal clashes, and intentionally excessive energy.

Q: What is it best for?
A: TikTok clips, Reels memes, shitpost-style edits, and chaotic internet humor.