Meta’s new AI lab hit a major milestone this month: the team shipped its first key AI models for internal use only.
The unit is called Meta Superintelligence Labs, and CTO Andrew Bosworth says the early systems already look “very good”.
- The lab was set up in 2025 as Meta reshuffled AI leadership.
- It now has its first internal models in production barely six months after real work began.
- The push comes after criticism of the company’s earlier Llama 4 model, which often lagged rivals in benchmarks and user tests.
Bosworth spoke on the sidelines of the World Economic Forum meeting in Davos, where he framed 2025 as a “tremendously chaotic year” of building labs, infrastructure, and compute capacity to even reach this point.
Inside Meta Superintelligence Labs
The new unit sits at the center of Meta’s attempt to catch up in the race against OpenAI, Google and others.
- The team is tasked with building the next wave of large‑scale text, image and video models.
- Meta has been hiring aggressively into the group, using very high compensation offers to lure senior AI researchers.
Bosworth did not name the exact systems that just landed internally, but previous leaks point to at least two codenames:
- A text‑focused AI model called “Avocado”, aimed at natural language tasks and expected to debut publicly in the first quarter.
- An image‑ and video‑centric model called “Mango”, designed to generate media for creative and social experiences.
Those models are believed to sit on top of Meta’s upgraded infrastructure stack, designed during the 2025 build‑out to support large‑scale training runs and high‑volume consumer inference.
From Llama 4 backlash to “post‑training” grind
Meta’s earlier flagship, Llama 4, faced sustained criticism for quality gaps versus competing foundation models, especially on complex reasoning and safety‑aligned outputs.
That pushback made the next generation of systems a reputational test, not just a technical milestone.
Bosworth stressed that even after training finishes, much of the hard work still lies in the “post‑training” phase.
- Engineers must compress, optimize and align models for internal tools and consumer apps.
- Teams also have to harden safety filters and build interfaces that work inside Meta products like Facebook, Instagram, WhatsApp and Ray‑Ban glasses.
He framed the new lab’s progress as early validation that Meta’s heavy 2025 investments in compute and talent are now starting to return value.
Why 2026 and 2027 matter for consumers
Bosworth expects 2026 and 2027 to be the years when consumer AI patterns stabilize.
He argued that current‑generation systems can already handle the everyday questions users ask around family and kids, even if highly complex queries still need work.
- Over the next two years, Meta plans to turn these internal models into visible user‑facing features.
- The goal is to embed AI deeply into feeds, messaging, productivity helpers and AR hardware.
One flagship example is Meta’s AI‑equipped Ray‑Ban Display smart glasses.
The company recently paused the glasses’ broader international rollout to focus on meeting strong United States demand, suggesting Meta wants a controlled but high‑quality early user base as AI capabilities improve.
Strategic stakes for Meta and investors
The internal launch has little public visibility but sends a strong signal to the market.
Meta is under pressure to show that it can build frontier‑grade AI that drives fresh engagement and new revenue streams, not just defend legacy social products.
Key implications:
- Product roadmap: Internal models are the prerequisite for new AI assistants, creation tools and ad products across Meta’s family of apps.
- Competitive position: If Avocado and Mango deliver strong real‑world performance, Meta could narrow or close the perceived gap with leading closed models in the next release cycle.
- Hardware + AI flywheel: Glasses, mobile apps and future AR devices give the company a unique surface to deploy multimodal models at scale.
For now, Bosworth is careful to say the technology is “not yet finished.”
But the message from Davos is clear: the lab is up, models are live inside Meta, and 2026–27 will be the window where users see whether this new AI strategy really works.
(Sources: The Indian Express, Reuters)
