OpenAI’s new GPT-4.1 AI models focus on coding


OpenAI on Monday launched a new family of models called GPT-4.1. Yes, “4.1” — as if the company’s nomenclature wasn’t confusing enough already.

There’s GPT-4.1, GPT-4.1 mini, and GPT-4.1 nano, all of which OpenAI says “excel” at coding and instruction following. Available through OpenAI’s API but not ChatGPT, the multimodal models have a 1-million-token context window, meaning they can take in roughly 750,000 words in one go (longer than “War and Peace”).
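Since the models are API-only, developers would reach them through OpenAI's standard SDKs. Here's a minimal sketch of what a call might look like, assuming the API identifiers follow OpenAI's usual lowercase naming ("gpt-4.1", "gpt-4.1-mini", "gpt-4.1-nano") and using the official Python SDK's chat completions interface — not an official example:

```python
# Minimal sketch, not an official OpenAI example.
# Assumes model IDs "gpt-4.1", "gpt-4.1-mini", and "gpt-4.1-nano"
# and the official OpenAI Python SDK.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4.1-mini",  # assumed identifier; swap in "gpt-4.1" or "gpt-4.1-nano"
    messages=[
        {"role": "system", "content": "You are a careful coding assistant."},
        {"role": "user", "content": "Write a Python function that reverses a linked list."},
    ],
)

print(response.choices[0].message.content)
```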

GPT-4.1 arrives as OpenAI rivals like Google and Anthropic ratchet up efforts to build sophisticated programming models. Google’s recently released Gemini 2.5 Pro, which also has a 1-million-token context window, ranks highly on popular coding benchmarks. So do Anthropic’s Claude 3.7 Sonnet and Chinese AI startup DeepSeek’s upgraded V3.

It’s the goal of many tech giants, including OpenAI, to train AI coding models capable of performing complex software engineering tasks. OpenAI’s grand ambition is to create an “agentic software engineer,” as CFO Sarah Friar put it during a tech summit in London last month. The company asserts its future models will be able to program entire apps end-to-end, handling aspects such as quality assurance, bug testing, and documentation writing.

GPT-4.1 is a step in this direction.

“We’ve optimized GPT-4.1 for real-world use based on direct feedback to improve in areas that developers care most about: frontend coding, making fewer extraneous edits, following formats reliably, adhering to response structure and ordering, consistent tool usage, and more,” an OpenAI spokesperson told TechCrunch via email. “These improvements enable developers to build agents that are considerably better at real-world software engineering tasks.”

OpenAI claims the full GPT-4.1 model outperforms its GPT-4o and GPT-4o mini models on coding benchmarks including SWE-bench. GPT-4.1 mini and nano are said to be more efficient and faster at the cost of some accuracy, with OpenAI saying GPT-4.1 nano is its speediest — and cheapest — model ever.

GPT-4.1 costs $2 per million input tokens and $8 per million output tokens. GPT-4.1 mini is $0.40/M input tokens and $1.60/M output tokens, and GPT-4.1 nano is $0.10/M input tokens and $0.40/M output tokens.
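At those rates, the cost of a request is simply input tokens times the input price plus output tokens times the output price, each scaled per million tokens. A small illustrative calculation, with the list prices above hardcoded (actual billing may differ):

```python
# Illustrative cost calculation using the per-million-token list prices quoted above.
PRICES = {  # model: (input $/1M tokens, output $/1M tokens)
    "gpt-4.1": (2.00, 8.00),
    "gpt-4.1-mini": (0.40, 1.60),
    "gpt-4.1-nano": (0.10, 0.40),
}

def request_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Return the dollar cost of one request at the quoted list prices."""
    in_price, out_price = PRICES[model]
    return (input_tokens / 1_000_000) * in_price + (output_tokens / 1_000_000) * out_price

# Example: a 50,000-token prompt with a 2,000-token reply on the full model.
print(f"${request_cost('gpt-4.1', 50_000, 2_000):.4f}")  # $0.1160
```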

According to OpenAI’s internal testing, GPT-4.1, which can generate more tokens at once than GPT-4o (32,768 versus 16,384), scored between 52% and 54.6% on SWE-bench Verified, a human-validated subset of SWE-bench. (OpenAI noted in a blog post that some solutions to SWE-bench Verified problems couldn’t run on its infrastructure, hence the range of scores.) Those figures are slightly under the scores reported by Google and Anthropic for Gemini 2.5 Pro (63.8%) and Claude 3.7 Sonnet (62.3%), respectively, on the same benchmark.

In a separate evaluation, OpenAI probed GPT-4.1 using Video-MME, which is designed to measure a model’s ability to “understand” content in videos. OpenAI claims GPT-4.1 reached a chart-topping 72% accuracy in the “long, no subtitles” video category.

While GPT-4.1 scores reasonably well on benchmarks and has a more recent “knowledge cutoff,” giving it a better frame of reference for current events (up to June 2024), it’s important to keep in mind that even some of the best models today struggle with tasks that wouldn’t trip up experts. For example, many studies have shown that code-generating models often fail to fix, and even introduce, security vulnerabilities and bugs.

OpenAI acknowledges, too, that GPT-4.1 becomes less reliable (i.e. likelier to make mistakes) the more input tokens it has to deal with. On one of the company’s own tests, OpenAI-MRCR, the model’s accuracy decreased from around 84% with 8,000 tokens to 50% with 1 million tokens. GPT-4.1 also tended to be more “literal” than GPT-4o, says the company, sometimes necessitating more specific, explicit prompts.

