GPU Environment Smoke Test
Validate the GPU lab environment: terminal, file operations, PyTorch, CUDA, and model loading.
What you'll learn
1. Verify GPU Access
2. Create a Tensor on GPU
3. Load a Model with Transformers
4. Train a Tiny Model (multi-file import)
What this GPU smoke-test lab verifies
Across four quick steps you'll verify that everything downstream of a Preporato GPU lab is actually wired up correctly. Step 1 calls torch.cuda.is_available(), reads the GPU name with torch.cuda.get_device_name(0), and prints total VRAM in gigabytes. Step 2 allocates two 1000x1000 tensors directly on CUDA, runs a matmul timed with torch.cuda.Event start/end records, repeats the same matmul on CPU, and confirms GPU wins. Step 3 loads GPT-2 through AutoModelForCausalLM.from_pretrained('gpt2'), moves it to CUDA, runs a 50-token generate, and reports VRAM in use. Step 4 imports a helper module from the workspace (proving multi-file Python imports work) and trains a tiny XOR network to >90% accuracy with BCE loss and Adam.
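The Step 1 and Step 2 checks described above can be sketched as follows. This is a minimal sketch, assuming only that PyTorch is installed; the device name, VRAM readout, and CUDA-event timing calls are the ones named in the walkthrough, and the GPU-vs-CPU comparison runs only when a CUDA device is actually present:

```python
import time
import torch

# Step 1: does PyTorch see a GPU, and how much VRAM does it have?
print(f"cuda_available = {torch.cuda.is_available()}")

if torch.cuda.is_available():
    props = torch.cuda.get_device_properties(0)
    print(f"{torch.cuda.get_device_name(0)}: {props.total_memory / 1024**3:.1f} GB VRAM")

    # Step 2: time a 1000x1000 matmul on GPU using CUDA events...
    a = torch.randn(1000, 1000, device="cuda")
    b = torch.randn(1000, 1000, device="cuda")
    start = torch.cuda.Event(enable_timing=True)
    end = torch.cuda.Event(enable_timing=True)
    torch.cuda.synchronize()
    start.record()
    _ = a @ b
    end.record()
    torch.cuda.synchronize()
    gpu_ms = start.elapsed_time(end)

    # ...then repeat the same matmul on CPU with a wall clock.
    a_cpu, b_cpu = a.cpu(), b.cpu()
    t0 = time.perf_counter()
    _ = a_cpu @ b_cpu
    cpu_ms = (time.perf_counter() - t0) * 1000
    print(f"GPU: {gpu_ms:.2f} ms, CPU: {cpu_ms:.2f} ms")
```

CUDA events are used instead of a wall clock on the GPU side because kernel launches are asynchronous; the `synchronize()` calls make sure the timing brackets the actual work.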
This is a pre-flight check, not a teaching lab. The point is to confirm in under 10 minutes that your browser terminal works, file operations in the workspace work, PyTorch sees the GPU, VRAM is at least 8 GB, and the Hugging Face cache can pull a model. If any step fails, you know the environment is broken before you sink an hour into a real lab and hit the same issue deep inside it. Total walkthrough: roughly 10 minutes, no prerequisites beyond basic Python.
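The model-pull check (Step 3) could be sketched like this. It is a guarded sketch, not the lab's exact cell: it assumes the `transformers` package is installed and skips itself when either `transformers` or a CUDA device is missing, and the prompt text is illustrative:

```python
import torch

try:
    from transformers import AutoModelForCausalLM, AutoTokenizer
except ImportError:
    AutoModelForCausalLM = None  # transformers not installed; skip below

if AutoModelForCausalLM is not None and torch.cuda.is_available():
    # Pull GPT-2 through the Hugging Face cache and move it to the GPU.
    tok = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2").to("cuda")

    # Run a 50-token generate and report VRAM in use.
    prompt = tok("The GPU smoke test", return_tensors="pt").to("cuda")
    out = model.generate(**prompt, max_new_tokens=50)
    print(tok.decode(out[0], skip_special_tokens=True))
    print(f"VRAM in use: {torch.cuda.memory_allocated() / 1024**3:.2f} GB")
else:
    print("transformers or CUDA unavailable; skipping model load")
```

If this cell hangs or errors on the download, the Hugging Face cache or network path is the broken piece, independent of the GPU itself.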
Frequently asked questions
What should I do if Step 1 reports cuda_available = False?
Stop there. Every later step depends on CUDA, so nothing else will pass either. It means PyTorch cannot see a GPU, which is an environment problem rather than a code problem: restart or rebuild the lab environment and rerun Step 1 before continuing.
Why does the lab check that GPU is faster than CPU in Step 2?
Because tensors can silently end up on the CPU, for example via a stray .to('cpu') somewhere; the program runs fine but you lose all the GPU speedup without noticing. The matmul timing comparison makes that visible: if GPU isn't clearly faster than CPU on a 1000x1000 matmul, something is off with how the tensor got dispatched.
Why is VRAM required to be at least 8 GB?
Because the real labs this smoke test gates assume at least that much headroom. Step 3 already loads GPT-2 onto the GPU and reports VRAM in use, and Step 1 prints total VRAM precisely so an undersized GPU is caught up front instead of as an out-of-memory error halfway through a real lab.
What is data.py in Step 4 and why does the lab care about importing it?
data.py is a small helper module in the workspace that exposes a get_xor_data('cuda') function returning XOR inputs and labels on GPU. The point of the step isn't the XOR task; it's to verify that multi-file Python imports work inside the Jupyter kernel. Several real labs split utility code into .py files and import it from the notebook, so if this step fails, the real labs will fail on their first import cell.
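Step 4 might look roughly like the sketch below. The get_xor_data helper is reproduced inline here for self-containment (in the lab it lives in data.py and is imported from the notebook), and the network shape, learning rate, and step count are illustrative assumptions, not the lab's exact values:

```python
import torch
import torch.nn as nn

# In the lab this function lives in data.py: from data import get_xor_data
def get_xor_data(device):
    # XOR truth table as float tensors on the requested device.
    X = torch.tensor([[0., 0.], [0., 1.], [1., 0.], [1., 1.]], device=device)
    y = torch.tensor([[0.], [1.], [1.], [0.]], device=device)
    return X, y

torch.manual_seed(0)
device = "cuda" if torch.cuda.is_available() else "cpu"
X, y = get_xor_data(device)

# Tiny network trained with BCE loss and Adam, as the walkthrough describes.
model = nn.Sequential(
    nn.Linear(2, 8), nn.ReLU(),
    nn.Linear(8, 1), nn.Sigmoid(),
).to(device)
opt = torch.optim.Adam(model.parameters(), lr=0.05)
loss_fn = nn.BCELoss()

for _ in range(500):
    opt.zero_grad()
    loss_fn(model(X), y).backward()
    opt.step()

acc = ((model(X) > 0.5).float() == y).float().mean().item()
print(f"XOR accuracy: {acc:.0%}")
```

Swapping the inline definition for `from data import get_xor_data` is the actual point of the step: if that import resolves, multi-file workspace imports are working.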