Dive Brief:
- Meta released Code Llama 70B, the largest model in the Code Llama family, according to a Monday announcement.
- The new offering is available in three versions: a foundational code model, a model specialized for Python and a model fine-tuned for understanding natural language instructions, according to Meta’s blog post.
- The 70B model works best for coding assistance, while the smaller Code Llama models are better suited for tasks that require low latency, such as real-time code completion, Meta said.
Dive Insight:
Coding assistance is an increasingly common target for large language model providers.
Software providers like SAP and Salesforce have aimed solutions at developer productivity, and leading generative AI vendors OpenAI and Anthropic have updated their models to strengthen coding capabilities.
Meta first released its family of coding-specific LLMs in August, with sizes ranging from 7 billion to 34 billion parameters. Code Llama 70B was trained on 1 trillion tokens of code and code-related data, while the earlier versions of Code Llama were trained on 500 billion tokens, Meta said.
“Writing and editing code has emerged as one of the most important uses of AI models today,” Meta CEO Mark Zuckerberg said in a Facebook post Monday. “The ability to code has also proven to be important for AI models to process information in other domains more rigorously and logically.”
Despite the growing popularity of AI-powered coding tools, such solutions can introduce problems for enterprises. More than half of organizations say they sometimes or frequently encounter security issues with AI-generated code, according to a Snyk survey.