OpenAI, the company behind ChatGPT, has released two new AI language models that users can run locally on their own computers and customise to their preferences, marking a shift towards greater openness.
The models, named gpt-oss-120b and gpt-oss-20b, have roughly 120 billion and 20 billion parameters respectively. Parameters are the internal values a model adjusts during training in order to make predictions; the more parameters, the more capable, but also the more resource-hungry, the model.
These models are small enough for hobbyists to run themselves. The 120-billion-parameter version requires a modern NVIDIA graphics card and aims to match the capabilities of o4-mini, one of ChatGPT’s most advanced models. The smaller model can run on a standard laptop with just 16 GB of RAM, but is less capable.
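Why a 20-billion-parameter model can squeeze into 16 GB of RAM comes down to how many bits each parameter is stored in. The sketch below is a rough back-of-the-envelope estimate for the weights alone (it ignores activations and other runtime memory, and the precision figures are illustrative assumptions, not OpenAI specifications):

```python
# Rough memory estimate for a model's weights at different precisions.
# A back-of-the-envelope sketch, not an official specification:
# real memory use also includes activations and caches.

def weight_memory_gb(num_params: int, bits_per_param: int) -> float:
    """Gigabytes needed to hold the raw weights alone."""
    return num_params * bits_per_param / 8 / 1e9

for bits in (16, 8, 4):
    gb = weight_memory_gb(20_000_000_000, bits)
    print(f"20B parameters at {bits}-bit: ~{gb:.0f} GB")
```

At full 16-bit precision, 20 billion parameters need about 40 GB, far too much for a typical laptop; compressed to 4 bits per parameter, they fit in roughly 10 GB, which is consistent with the 16 GB figure quoted above.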
This marks OpenAI’s first "open weight" model release in six years. Such models can be customised because their internal parameters, the weights, are available for modification, but this does not mean the company is releasing its source code or training data.
OpenAI felt compelled to release these models due to increasing competition from the Chinese start-up DeepSeek, whose AI models are largely open and comparable in performance to their American counterparts.
However, open models come with security implications: they are more vulnerable to manipulation and could, in theory, be misused for dangerous or illegal purposes. For that reason, the release was postponed in March. OpenAI stresses that it has thoroughly tested the new models and insists there are no major risks.