TikTok is making headlines again today after the White House joined the popular social media app, but its parent company ByteDance, the Chinese web giant, also had a surprise announcement up its sleeve.
The company's Seed Team of AI researchers today released Seed-OSS-36B on the AI code-sharing site Hugging Face.
Seed-OSS-36B is a new line of open source large language models (LLMs) designed for advanced reasoning and developer-focused usability, with a longer token context (that is, how much information the models can accept as input and then output in a single exchange) than many competing LLMs from U.S. tech companies, including leaders such as OpenAI and Anthropic.
The collection introduces three main variants:
- Seed-OSS-36B-Base with synthetic data
- Seed-OSS-36B-Base without synthetic data
- Seed-OSS-36B-Instruct
By releasing both synthetic and non-synthetic versions of the Seed-OSS-36B-Base model, the Seed Team sought to balance practical performance with research flexibility.
The synthetic-data variant, trained with additional instruction data, consistently delivers stronger scores on standard benchmarks and is intended as a higher-performing general-purpose option.
The non-synthetic model, by contrast, omits these augmentations, creating a cleaner foundation that avoids potential bias or distortion introduced by synthetic instruction data.
By providing both, the team gives applied users access to improved results while ensuring researchers retain a neutral baseline for studying post-training methods.
Meanwhile, the Seed-OSS-36B-Instruct model differs in that it is post-trained with instruction data to prioritize task execution and instruction following, rather than serving purely as a foundation model.
All three models are released under the Apache-2.0 license, allowing free use, modification, and redistribution by researchers and developers working for enterprises.
That means they can be used to power commercial applications, whether internal to a company or external and customer-facing, without paying ByteDance any licensing fees or application programming interface (API) usage charges.
This continues the summer 2025 trend of Chinese companies shipping powerful open source models, with OpenAI trying to catch up via its own open source gpt-oss duo released earlier this month.
The Seed Team positions Seed-OSS for international applications, emphasizing versatility across reasoning, agent-like task execution, and multilingual settings.
The Seed Team, formed in 2023, has concentrated on building foundation models that can serve both research and applied use cases.
Design and core features
The architecture behind Seed-OSS-36B combines familiar design choices such as causal language modeling, grouped query attention, SwiGLU activation, RMSNorm, and RoPE positional encoding.
Each model carries 36 billion parameters across 64 layers and supports a vocabulary of 155,000 tokens.
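For teams doing capacity planning, those specs can be checked directly against the published configuration. Here is a minimal sketch, assuming the Hugging Face repo ID ByteDance-Seed/Seed-OSS-36B-Instruct and standard Transformers config field names (both assumptions, not confirmed in the release notes quoted here):

```python
# Minimal sketch: inspect the published model configuration without
# downloading the full weights. Repo ID and field names are assumptions
# based on Hugging Face conventions.
from transformers import AutoConfig

config = AutoConfig.from_pretrained(
    "ByteDance-Seed/Seed-OSS-36B-Instruct",  # assumed repo ID
    trust_remote_code=True,  # custom architectures often ship their own config class
)

print(config.num_hidden_layers)        # expected: 64
print(config.vocab_size)               # expected: ~155,000
print(config.max_position_embeddings)  # expected: 512,000 native context
```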
One of the defining features is its native long-context capability, with a maximum length of 512,000 tokens, designed to process lengthy documents and reasoning chains without performance loss.
That's twice the length of OpenAI's new GPT-5 model family and roughly equivalent to about 1,600 pages of text, the length of a Christian Bible.
Another distinguishing element is the introduction of a thinking budget, which lets developers specify how much reasoning the model should perform before delivering an answer.
It's something we've seen from other recent open source models as well, including Nvidia's new Nemotron-Nano-9B-v2, also available on Hugging Face.
In practice, this means teams can tune performance depending on the complexity of the task and the efficiency requirements of deployment.
Budgets are recommended in multiples of 512 tokens, with 0 providing a direct-response mode.
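The release material quoted here doesn't spell out the exact interface, but a budget like this would typically be set at prompt-construction time. A hedged sketch, assuming a thinking_budget kwarg consumed by the model's chat template and the repo ID above (both assumptions; check the model card for the real interface):

```python
# Hedged sketch: request a bounded amount of reasoning before the answer.
# The "thinking_budget" kwarg name and repo ID are assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ByteDance-Seed/Seed-OSS-36B-Instruct"  # assumed repo ID
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id, device_map="auto", trust_remote_code=True
)

messages = [{"role": "user", "content": "How many primes are there below 50?"}]

# apply_chat_template forwards extra kwargs to the template renderer.
# Per the guidance above, 0 would request a direct answer, while larger
# multiples of 512 allow progressively more reasoning.
input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
    thinking_budget=512,  # assumed kwarg
).to(model.device)

output = model.generate(input_ids, max_new_tokens=1024)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```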
Competitive performance on third-party benchmarks
Benchmarks published with the release place Seed-OSS-36B among the stronger large open-source models. The Instruct variant, in particular, posts state-of-the-art results in several areas.
- Math and reasoning: Seed-OSS-36B-Instruct achieves 91.7% on AIME24 and 65 on BeyondAIME, both representing the open-source "state-of-the-art" (SOTA).
- Coding: On LiveCodeBench v6, the Instruct model records 67.4, another SOTA score.
- Long-context handling: On RULER at a 128K context length, it reaches 94.6, the highest open-source result reported.
- Base model performance: The synthetic-data Base variant delivers 65.1 on MMLU-Pro and 81.7 on MATH, both state-of-the-art results in their categories.
The no-synthetic Base version, while slightly behind on many measures, proves competitive in its own right.
It outperforms its synthetic counterpart on GPQA-D, giving researchers a cleaner, instruction-free baseline for experimentation.
For enterprises evaluating open options, these results suggest Seed-OSS offers strong potential across math-heavy, coding, and long-context workloads while still providing flexibility for research use cases.
Access and deployment
Beyond performance, the Seed Team highlights accessibility for developers and practitioners. The models can be deployed using Hugging Face Transformers, with quantization support in both 4-bit and 8-bit formats to reduce memory requirements.
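As a rough illustration of the quantized path, here is a minimal sketch using the standard bitsandbytes route in Transformers; the repo ID is an assumption, and the team's own release scripts may use a different mechanism:

```python
# Minimal sketch: load the 36B model in 4-bit to cut memory requirements
# (roughly from ~72 GB in bf16 to around 20 GB). Repo ID is an assumption.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

quant_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.bfloat16,  # store weights in 4-bit, compute in bf16
)

model = AutoModelForCausalLM.from_pretrained(
    "ByteDance-Seed/Seed-OSS-36B-Instruct",  # assumed repo ID
    quantization_config=quant_config,
    device_map="auto",
    trust_remote_code=True,
)
```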
They also integrate with vLLM for scalable serving, complete with configuration examples and API server instructions.
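For serving, a minimal vLLM sketch might look like the following, assuming the installed vLLM build supports the architecture; the repo ID is again an assumption:

```python
# Hedged sketch: offline batch inference with vLLM. For an OpenAI-compatible
# endpoint, recent vLLM builds also offer `vllm serve <repo-id>` on the CLI.
from vllm import LLM, SamplingParams

llm = LLM(model="ByteDance-Seed/Seed-OSS-36B-Instruct", trust_remote_code=True)
params = SamplingParams(temperature=0.7, max_tokens=512)

outputs = llm.generate(["Summarize the Apache-2.0 license in one sentence."], params)
print(outputs[0].outputs[0].text)
```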
To lower barriers further, the team includes scripts for inference, prompt customization, and tool integration.
For technical leaders managing small teams or operating under budget constraints, these provisions are positioned to make experimentation with 36-billion-parameter models more approachable.
Licensing and considerations for enterprise decision-makers
With the models offered under Apache-2.0, organizations can adopt them without restrictive licensing terms, an important factor for teams balancing legal and operational concerns.
For decision makers evaluating the open-source landscape, the release brings three takeaways:
- State-of-the-art benchmarks across math, coding, and long-context reasoning.
- A balance between higher-performing synthetic-trained models and clean research baselines.
- Accessibility features that lower operational overhead for lean engineering teams.
By placing strong performance and flexible deployment under an open license, ByteDance's Seed Team has added new options for enterprises, researchers, and developers alike.