Google's localllm is finally here, and devs no longer need to worry about a lack of GPUs

A big boost, especially for those working with large language models (LLMs).


Key notes

  • Google has finally launched localllm for developers.
  • It allows devs to run LLMs locally on CPUs and memory, right within Google Cloud Workstations.
  • You can get started with localllm through its GitHub repository.

Google has finally launched localllm for developers, and it can be a big boost, especially for those working with large language models (LLMs).

The major hurdle for devs is the scarcity of GPUs, which are traditionally required to run these powerful AI models. This new localllm, however, changes the game by allowing developers to run LLMs locally on CPUs and memory, right within Google Cloud Workstations.

The tech company says localllm removes the dependency on expensive GPUs, and because you can develop AI-based apps directly within the Google Cloud ecosystem, it could also cut your costs.

You can get started with localllm through its GitHub repository. From there, you can use quantized models, run them with localllm, and then deploy them on Cloud Workstations to make the most of its managed development environment.
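Once a quantized model is running locally, your application code can talk to it much like it would to a hosted model. The sketch below is illustrative only and assumes the locally served model exposes an OpenAI-compatible API on port 8000; the endpoint, model name, and prompt are assumptions for the example, not details from Google's announcement. Only the standard openai Python client calls are real.

# Illustrative sketch: querying a quantized model assumed to be served
# locally by localllm on port 8000 via an OpenAI-compatible API.
# The port, model name, and prompt are placeholders, not documented values.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",  # assumed local endpoint
    api_key="not-needed-locally",         # placeholder; no cloud API key is used
)
response = client.chat.completions.create(
    model="local-model",  # hypothetical name; use whichever model you served
    messages=[{"role": "user", "content": "Summarize what localllm does."}],
)
print(response.choices[0].message.content)

Because the request goes to localhost rather than a cloud API, the same application code can later be pointed at a hosted endpoint by changing only the base URL.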

“By eliminating the dependency on GPUs, you can unlock the full potential of LLMs for your application development needs,” Google promises. 

Speaking of devs, Microsoft, Google's competitor, is now rolling out Azure OpenAI Assistants in public preview, making it easier for app developers to build copilot-like assistant features.
