Laptop for video editing and ML

Hi. I would really appreciate if anyone could give me some advice for a laptop.

TL;DR: is the 5070's VRAM a major problem for AI/video editing workloads on laptops such as the maxed-out Yoga Pro 9i and Legion 7i? Should I get a Yoga, a Legion, a Zephyrus G16 (only 32 GB RAM, but a 5070 Ti), or wait in case more ProArt P16 models (a 5070 Ti model) come out?

I’m looking for a portable laptop (school, frequent travel, no games) that will be used for 4+ years. I need it for data science (training simple AI models and running some local LLMs), video editing, programming, and daily tasks. I’m considering a MacBook, but I’m wondering whether PyTorch and other AI tooling will have problems on macOS compared to CUDA on Windows.

The Yoga Pro 9i with the RTX 5070, 64 GB of RAM, and a tandem OLED display is my top pick, but I’m worried about the 8 GB of VRAM bottlenecking AI/creator work, and about the lack of pen support. Would you recommend the ProArt P16 with the RTX 5070 over the Yoga? Does the ProArt run noticeably faster or feel more premium? These two laptops are neck and neck, trading blows in their pros and cons.

The ProArt P16 with the 5090 is too expensive for me; the ideal laptops I have in mind are the RTX 5070 Ti and 5080 versions of the P16, which I’ve heard are available in other countries (I’m in the US). However, I don’t know if or when they’ll come out here, and I’d like to get a new laptop soon. It’s also hard to find good deals on ProArt laptops.

The Legion 7i and Zephyrus G16 are good alternatives, but I’d prefer not to have gamer features or branding. The Legion also caps at a 5070, just like the Yoga. Are the Legion’s upgradeable RAM and HX CPU preferable to the Yoga’s tandem OLED? The HX CPU has worse battery life and generates a lot of heat. The Zephyrus with the 5070 Ti also works for me, but 32 GB of RAM might be a problem for what I’m doing. The G16 with a 5080 and 64 GB is the cheapest G16 with 64 GB of RAM, but it’s too expensive.

Should I wait for the RTX 5070 Ti/5080 ProArts (which might never arrive), or buy a 5070 Yoga, a 5070 ProArt, a Legion, or a G16 right now? I’ve done tons of research and would like to buy ASAP, before the Black Friday deals are gone. It’s not critical that I buy now, but I’d like a good deal and can only wait about four months. I’m new to this workflow and don’t know which of VRAM, RAM, and screen quality I should accept as the weak point, and which to prioritize instead.

Thank you so much for your time. I would really appreciate any help!

As an AIML student, I can offer a recommendation based on the specific technical trade-offs you will face.

The decision comes down to Processing Speed (CUDA) vs. Model Capacity (Unified Memory).

  1. The Speed Argument (CUDA): CUDA is the industry standard for AI acceleration. A Windows laptop with an RTX 5080 (16 GB VRAM) will significantly outperform a MacBook Pro in raw speed. For example, when running smaller models (like Llama 3 8B), the RTX 5080 will generate tokens roughly 40–50% faster than Apple’s M5 chip because of its dedicated high-bandwidth memory.
  2. The Capacity Argument (Apple’s Unified Memory): However, the RTX 5080 hits a hard wall at 16 GB of VRAM. If you want to load larger, more capable models (like a quantized 30B-parameter model), the RTX 5080 simply cannot run them; it lacks the VRAM. This is where the MacBook shines: Apple’s unified memory lets the GPU address the entire system RAM, so a MacBook with 32 GB (or more) of RAM can load and run models the RTX 5080 can’t touch. You trade some speed for the ability to run much larger, smarter models.
  3. Software & Ecosystem: While CUDA is king for training models, Apple’s Metal (MPS) ecosystem has matured. Frameworks like PyTorch, TensorFlow, and Hugging Face now run natively on macOS, and Apple-exclusive tools like MLX are highly efficient for local inference. Unless you are training deep learning models from scratch (which is rare on a laptop anyway due to heat and power constraints), a Mac is a fully capable development machine.
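On the ecosystem point, the practical upshot is that the same PyTorch code can run on either machine. Here is a minimal, illustrative device-selection sketch (it assumes a recent PyTorch build with the MPS backend compiled in; `pick_device` is just a name I made up):

```python
# Minimal sketch: pick the best available accelerator in PyTorch.
# Preference order: CUDA (NVIDIA/Windows) -> MPS (Apple Silicon) -> CPU.
import torch

def pick_device() -> torch.device:
    if torch.cuda.is_available():           # NVIDIA GPU with CUDA
        return torch.device("cuda")
    if torch.backends.mps.is_available():   # Apple Metal backend
        return torch.device("mps")
    return torch.device("cpu")              # universal fallback

device = pick_device()
# The rest of the training/inference code is identical on either machine:
x = torch.randn(4, 4, device=device)
y = x @ x.T
print(device.type, tuple(y.shape))
```

Because the device is resolved at runtime, a script written this way moves between a CUDA laptop and a MacBook without code changes; only raw speed and memory ceilings differ.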

Since you prioritize portability, aesthetics, and 4+ years of use, and your LLM work focuses on running local models (inference) and simple training rather than heavy custom training, Apple (Metal/MPS) is the more practical and reliable choice for your overall quality of life.

If you are a hardcore student or researcher whose primary task is heavy model training, CUDA remains the better choice, but you should look for a model with 12 GB+ of VRAM and accept the trade-offs in portability and battery life.

If you choose Windows: you absolutely need VRAM. Aim for the RTX 5070 Ti (12 GB) at minimum; the RTX 5080 (16 GB) is ideal. The ASUS ProArt P16 does have a 5080 configuration and is an excellent choice (unfortunately not available in the US), as is the Zephyrus G16. If you choose Mac: prioritize RAM. Get an M5 MacBook Pro with at least 32 GB of unified memory. If you can wait, the upcoming M5 Pro/Max chips will offer even better performance.
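To make those VRAM targets concrete, here is a rough back-of-envelope estimate of the memory a model's weights need at a given quantization level. The 20% overhead factor (for KV cache and activations) is my assumption, not a measured figure, and real usage varies by runtime and context length:

```python
# Back-of-envelope memory estimate for loading LLM weights locally.
# Rule of thumb: weight memory ~= parameters * bits-per-weight / 8,
# plus overhead for KV cache and activations (assumed ~20% here).
def weight_gb(params_billion: float, bits: int, overhead: float = 0.20) -> float:
    bytes_total = params_billion * 1e9 * bits / 8
    return bytes_total * (1 + overhead) / 1e9

# An 8B model at 4-bit quantization fits comfortably in 8 GB of VRAM:
print(round(weight_gb(8, 4), 1))    # -> 4.8
# A 30B model at 4-bit already exceeds a 16 GB card by this estimate:
print(round(weight_gb(30, 4), 1))   # -> 18.0
```

By this estimate, the 8 GB cards (5070) are fine for small-model inference, 12–16 GB opens up the mid-size quantized models, and 30B-class models are where unified memory starts to pay off.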

The Zephyrus G16 is the one to choose, in my opinion, and the RTX 5070 Ti is the sweet spot for that configuration, although it is also available with an RTX 5080.

Thanks for some insight!

I think I’d prefer a CUDA device, since a laptop isn’t an ideal machine for large-model work anyway… I guess my laptop will be for smaller-model work. Do you think I should prioritize 32→64 GB of RAM and a nicer screen over a 5070→5070 Ti upgrade for video editing?

Do you know where I might find the 5070 Ti or 5080 versions of the ProArt? I like that laptop more than the Zephyrus.

The P16 only comes with the RTX 5070 or the RTX 5090, as far as I’m aware.

I’ve found some Reddit posts talking about 5070 Ti/5080 versions, such as this one: https://www.reddit.com/r/ASUS/comments/1ndht65/proart_p16_with_5080_and_120hz_4k_finally/. It looks like those versions aren’t sold in the US… I found a 5080 for sale in the UK, but it’s more expensive than the US 5090. The 5070 Ti would be the perfect laptop for me.