FramePack is a video generation method that lets you create long AI videos with limited VRAM. If you don't have a capable Nvidia GPU, you can run FramePack on the Google Colab online service. It's a cost-effective option, at around $0.20 per hour.
What is FramePack?
FramePack overcomes the memory limitations of many video generators, such as Wan 2.1, Hunyuan, and LTX Video, by using a fixed transformer context length regardless of the total video length.
This means generating a 1-minute clip consumes the same VRAM as a 1-second one.
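The idea can be illustrated with a toy sketch (my own illustration, not FramePack's actual code): older frames are allotted geometrically fewer context tokens, so the total context size stays within a fixed budget no matter how many frames have already been generated.

```python
# Toy illustration (not FramePack's real implementation): pack a growing
# frame history into a fixed token budget by giving each progressively
# older frame half as many tokens (1/2, 1/4, 1/8, ... of the budget).
def pack_context(num_frames, budget=1024):
    """Return tokens allotted per frame, newest first; total never exceeds budget."""
    tokens = []
    remaining = budget
    for _ in range(num_frames):
        share = max(remaining // 2, 1)  # each older frame gets half as much
        tokens.append(share)
        remaining -= share
        if remaining <= 0:
            break
    return tokens
```

However long the video gets, the packed context never exceeds the budget, which is why a 1-minute clip and a 1-second clip need the same VRAM.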
See my original FramePack tutorial for a full intro.
What is Google Colab?
Google Colab (short for Google Colaboratory) is an interactive computing service provided by Google. It is a hosted Jupyter Notebook environment that lets you run code in the browser, with access to GPUs.
This notebook runs on the L4 runtime (and likely the A100) available in the paid tiers, but not on the free-tier T4 runtime. At the time of writing, it costs about $0.20 per hour to use.
Google offers three paid plans, and you need one of them. I recommend the Colab Pro plan because the high-RAM option it includes is required by my other notebooks.

FramePack on Google Colab
Step 1: Open the Colab Notebook
Go to the FramePack_Colab page. Give it a star. (Okay, this is optional…)
Click on the Open in Colab link to open the notebook.

Step 2: Start the notebook

Run the notebook by clicking the Play button under the FramePack cell. It will take some time to install FramePack and download the models.
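Under the hood, the setup cell does something like the following (a rough sketch of a typical FramePack setup, not the notebook's exact contents):

```shell
# Assumed setup steps, not the notebook's exact code: clone FramePack,
# install its dependencies, and launch the Gradio demo with a public
# share link (this is what produces the gradio.live URL).
git clone https://github.com/lllyasviel/FramePack
cd FramePack
pip install -r requirements.txt
python demo_gradio.py --share
```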
Step 3: Start FramePack
When setup completes, click the public gradio.live link printed in the cell output to open FramePack.

You should see the FramePack app.

Step 4: Upload the initial image
Upload an image to FramePack's Image canvas. You can use the test image below to verify your setup.

Step 5: Enter the prompt
Enter a prompt that matches the image in the Prompt text box.
A girl handing out a flower

Step 6: Generate a video
Click Start Generation to generate a video.
FramePack generates the end of the video first and extends it backward toward the beginning. In the console, you will see several progress bars before the video is ready.
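This backward ordering can be sketched as a toy loop (my illustration, not FramePack's code): each new section is generated conditioned on the sections that come after it, then prepended to the clip.

```python
# Toy sketch (assumption, not FramePack's real code) of reverse-order
# generation: the final section is produced first, and each earlier
# section is generated conditioned on the video that follows it.
def generate_video(num_sections, generate_section):
    video = []
    for i in reversed(range(num_sections)):
        section = generate_section(index=i, future=video)
        video = section + video  # prepend: extend toward the beginning
    return video
```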
For a complete setting guide and instructions to generate longer videos, see my FramePack tutorial.
Thanks very much, Andrew, we asked for a notebook and you gave us one 🙂
FramePack seems to work well, and with the A100 GPU it was using about 28 GB of the 40 GB memory allocation. Your 5-second test video took about 8 minutes without TeaCache and about 4-5 minutes with it. A 10-second video took about 8 minutes with TeaCache, and so far I haven't seen any significant reduction in quality using the faster version. This is all really good!
It seems to work with different aspect ratio images as well as the 7:4 you used. I tried using a high res image to give a better final result with the Hunyuan model and that worked well.
A couple of minor suggestions for the notebook: add ngrok (incl. secrets) as a channel and add the time taken to complete the prompt.
It will be interesting to see how this develops and whether future versions can be used with ComfyUI and allow other video models.
*and a default save location for the videos, /outputs or somewhere; I keep forgetting to save them…
Thanks for sharing the stats! I will add ngrok.
The saved videos can be accessed through the file explorer sidebar: /content/FramePack/outputs
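If you want to pull everything down at once, a small helper like this (my own snippet, not part of the notebook) zips the outputs folder so it can be downloaded in one click:

```python
# Hypothetical helper, not part of the FramePack notebook: zip the
# outputs folder so all generated videos can be downloaded together.
import shutil

def archive_outputs(outputs_dir, archive_base="framepack_videos"):
    """Zip every file under outputs_dir and return the archive path."""
    return shutil.make_archive(archive_base, "zip", outputs_dir)

# In Colab you would then run:
#   from google.colab import files
#   files.download(archive_outputs("/content/FramePack/outputs"))
```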