Ask about experience, focus areas, certifications, or how to get in touch. The assistant runs entirely in your browser; no data is sent to any server.
Pick a model below and load it once. Everything runs locally via WebGPU. Smaller models load faster; larger ones can give richer answers.
Choose a model and click Load. Requires a WebGPU-capable browser (e.g. Chrome or Edge).
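For context, WebGPU support can be feature-detected before attempting to load a model. This is a minimal sketch, not the site's actual implementation; the `nav` parameter is an assumption added so the check can run outside a browser:

```typescript
// Sketch: check whether WebGPU is usable before loading a model.
// `nav` defaults to the global navigator in a browser; it is a
// hypothetical parameter added here for illustration/testing.
async function hasWebGPU(
  nav: any = typeof navigator !== "undefined" ? navigator : undefined
): Promise<boolean> {
  if (!nav?.gpu) return false; // WebGPU API not exposed by this browser
  try {
    // requestAdapter() resolves to null when no suitable GPU is available
    return (await nav.gpu.requestAdapter()) !== null;
  } catch {
    return false; // treat any adapter error as "not supported"
  }
}
```

A page would typically run this check on load and show the static-summary fallback when it resolves to `false`.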
Tip: start with Llama 3.2 1B for a good balance of speed and quality; switch to 3B if you have more memory and want more detailed answers.
Try one of these questions:
If the assistant cannot load in your browser, you can still read a short static summary of Alessandro's profile here.