Meta LLM Compiler, available in 7B & 13B variants, will soon let you code like never before

Mark Zuckerberg open-sourced Code Llama earlier this year


Key notes

  • Meta has launched the Meta LLM Compiler for code optimization and compiler reasoning.
  • It outperforms previous models in compiler tasks and is available in 7B and 13B variants.
  • The models are open-source and can be used for both research and commercial purposes.

Meta, Facebook’s parent company, has just launched yet another large language model, and this time, it’s mostly directed towards software and app developers. It’s called Meta LLM Compiler, a “state-of-the-art” model for code optimization and compiler reasoning.

The tech giant says the family of models is built on Meta Code Llama and can emulate compilers, predict optimization passes for code, and disassemble code with impressive results. All the models are released under a permissive license that allows both research and commercial use by devs and researchers alike.

Meta LLM Compiler comes in two parameter sizes: the 7B variant for low-latency tasks and the 13B variant for the best results, both with a 16k-token context window. Each size ships as a base model plus a version fine-tuned for code size optimization and disassembly (FTD), and you can try them on Hugging Face.
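
For developers who want to poke at the checkpoints, here's a minimal sketch of loading one through the standard Hugging Face transformers API. The model ID and prompt placeholder below are assumptions based on Meta's usual naming on the hub; check the model cards for the exact repository names and the prompt template each variant expects.

```python
# A minimal sketch, assuming the checkpoints follow Meta's usual naming on
# Hugging Face (e.g. "facebook/llm-compiler-7b") and the standard
# transformers causal-LM API. Verify IDs and prompt format on the model card.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "facebook/llm-compiler-7b"  # or "-13b"; add "-ftd" for the fine-tuned variants
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# The prompt would contain LLVM-IR plus a task instruction; the exact
# template each variant expects is documented on its model card.
prompt = "..."  # LLVM-IR and instruction go here
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```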

“Models that are accessible to the public can expedite the creation of novel compiler optimization technologies. In turn, this will allow programs to be more efficient and smaller, enhancing the quality of life for all. By making models such as LLM Compiler available, the whole community can explore their potential, pinpoint problems, and rectify any vulnerabilities,” Meta's researchers write in the paper.

LLM Compiler has been pre-trained on LLVM-IR and x86_64, ARM, and CUDA assembly code, learning to predict code size changes and the output of optimizations. According to Meta, the latest coding model outperforms Code Llama and GPT-4 Turbo at emulating compiler optimizations, optimizing IR for code size, and disassembly tasks, achieving up to 20% accuracy and a 4.88% code size improvement.
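
To make that benchmark concrete: "emulating a compiler optimization" means the model is shown unoptimized LLVM-IR and must predict the IR that LLVM's opt tool would actually produce. A rough sketch of that round trip using standard LLVM command-line tooling (the file names here are illustrative, and the model call would be the one sketched above):

```python
import subprocess

# Lower a C file to unoptimized LLVM-IR with clang (standard LLVM tooling).
subprocess.run(
    ["clang", "-S", "-emit-llvm", "-O0", "example.c", "-o", "example.ll"],
    check=True,
)
unoptimized_ir = open("example.ll").read()

# Produce the ground truth: the IR opt emits when optimizing for size.
subprocess.run(
    ["opt", "-S", "-passes=default<Oz>", "example.ll", "-o", "example.opt.ll"],
    check=True,
)
reference_ir = open("example.opt.ll").read()

# The model's task: given unoptimized_ir (and the pass list) in its prompt,
# generate a prediction of reference_ir. Emulation accuracy is the share of
# cases where the predicted IR matches what opt really produced.
```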

Earlier this year, Mark Zuckerberg, the Meta boss, announced that his company was open-sourcing Code Llama. Launched in August 2023, Code Llama supports many popular programming languages, like Python, C++, Java, PHP, TypeScript (JavaScript), C#, and more, and is available in multiple sizes (7B, 13B, and 34B) for different latency needs.
