Are you talking about gemma.cpp? Then no, they didn't.
The claim is correct, but it isn't related to Gemma:
> Run LLMs locally on Cloud Workstations. Uses:
> Quantized models from [Huggingface]
> llama-cpp-python's webserver
But sure, the blog post doesn't mention it.
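For reference, the stack the quote describes can be reproduced roughly like this (the model repo and filename are illustrative, not taken from the blog post):

```shell
# Install llama-cpp-python with its OpenAI-compatible webserver extra
pip install 'llama-cpp-python[server]'

# Pull a quantized GGUF model from Hugging Face (illustrative repo/file)
huggingface-cli download TheBloke/Llama-2-7B-GGUF llama-2-7b.Q4_K_M.gguf --local-dir models

# Serve it locally; exposes an OpenAI-style API on http://localhost:8000
python -m llama_cpp.server --model models/llama-2-7b.Q4_K_M.gguf
```

None of this involves gemma.cpp, which is a separate standalone C++ inference engine.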