
Who We Are
Xavi.app is a Canadian-owned AI infrastructure company based in Prince George, BC. We design, build and operate high-performance GPU clusters and developer-friendly APIs that let businesses of any size train, fine-tune and deploy large-scale machine-learning models—without wrestling with hardware or sky-high cloud costs.
Our Story
Xavi.app was born out of frustration with the price-performance gap between hobby-grade GPUs and hyperscale clouds. After years of using various AI services elsewhere, we realised that Northern BC’s inexpensive hydro power could give Canadian startups a local alternative to US-centric providers. We plan to rack our first A100 pod by mid-2025.
What We Do
- GPU Cloud & Bare-Metal Rental: On-demand V100, A100 40/80 GB and H100 80 GB nodes, billed by the minute or reserved monthly.
- Inference-as-a-Service: One-click endpoints for popular open-source LLMs (Llama 3, Mistral, Gemma, etc.) with automatic sharding, quantisation options and usage-based billing; a request sketch follows this list.
- Managed Training Pipelines: Containerised environments (Docker & Singularity) with pre-tuned NCCL, CUDA and PyTorch stacks; a distributed-training sketch also follows below.
- Canadian Sovereign Hosting: All compute stays on-shore in Tier III data centres powered by 100 % renewable hydro, ideal for projects subject to PIPEDA, HIPAA or provincial privacy rules.
- DevOps & Integration Support: From Terraform modules to custom Kubernetes operators, our engineers help you wire ML workloads into CI/CD pipelines or on-premise hybrid clusters.
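As a rough illustration of how an Inference-as-a-Service endpoint is typically consumed, the sketch below sends a completion request over HTTPS. The base URL, path, payload fields, model identifier and auth header are assumptions made for the example, not the documented Xavi.app API.

```python
# Hypothetical example: calling a hosted Llama 3 endpoint.
# The URL, payload fields and auth header are illustrative assumptions,
# not the actual Xavi.app API.
import os
import requests

API_KEY = os.environ["XAVI_API_KEY"]               # assumed auth scheme
ENDPOINT = "https://api.xavi.app/v1/completions"   # assumed base URL and path

response = requests.post(
    ENDPOINT,
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        "model": "llama-3-8b-instruct",            # one of the hosted open-source LLMs
        "prompt": "Summarise PIPEDA in two sentences.",
        "max_tokens": 128,
        "quantization": "int8",                    # assumed quantisation option
    },
    timeout=30,
)
response.raise_for_status()
print(response.json())
```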
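For the managed training pipelines, a job running inside one of the containerised PyTorch stacks would look roughly like the minimal sketch below: it initialises NCCL-backed distributed training via standard PyTorch APIs. The launch command and the model are stand-ins; the pre-tuned NCCL/CUDA image it would run in is assumed, not specified here.

```python
# Minimal distributed-training sketch for a containerised PyTorch stack.
# Typically launched with torchrun, e.g. `torchrun --nproc_per_node=8 train.py`;
# torchrun injects RANK, LOCAL_RANK and WORLD_SIZE into the environment.
import os
import torch
import torch.distributed as dist

def main():
    dist.init_process_group(backend="nccl")        # NCCL backend for multi-GPU comms
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    # Stand-in model; a real job would load its own architecture and data.
    model = torch.nn.Linear(1024, 1024).cuda(local_rank)
    model = torch.nn.parallel.DistributedDataParallel(model, device_ids=[local_rank])

    optimiser = torch.optim.AdamW(model.parameters(), lr=1e-4)
    for step in range(10):                         # stand-in training loop
        x = torch.randn(32, 1024, device=local_rank)
        loss = model(x).pow(2).mean()
        optimiser.zero_grad()
        loss.backward()
        optimiser.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```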
Why Choose Xavi.app?
| | Xavi.app | Public Cloud Giants |
| --- | --- | --- |
| Pricing | Flat, transparent CAD rates; no egress fees within Canada | Layered on-top charges and region premiums |
| Latency | ≤5 ms for Western Canada | 40–80 ms cross-border hops |
| Support | Matrix chat with real engineers | Ticket queues & chatbots |
| Sustainability | BC renewable grid, 0.04 kg CO₂/kWh | Mixed energy portfolios |
Looking Ahead
We’re actively expanding into:
- Edge AI appliances for smart-city, industrial IoT and telco deployments.
- Federated fine-tuning so customers can train on sensitive data without it ever leaving their VPC.
- Open-source contributions to the Kubernetes & Rust ML tooling ecosystem.