xAI

Grok-4.3

xAI's flagship: 1M context, always-on reasoning, video input

Category: Language
Modality: Text · Tool use
Context: 1M tokens
Released: April 2026
Strengths

What it's the best tool for

  • Reasoning is always on for every request
  • 1M-token context, 4× bigger than Grok 4.20
  • Native video input in the API — first for xAI
  • +300 Elo on agentic tasks vs Grok 4.20
  • 40% cheaper than the previous flagship
  • Clean LaTeX and strict instruction following
Limitations

When to reach for something else

  • No native image or video generation — input only
  • Requests above 200K tokens billed at a higher tier
  • Content filters lighter than Claude, but still active
  • Reasoning is always-on — slower than plain chat on trivial prompts
Sample output

How Grok-4.3 responds

Prompt
Output Euclid's proof of the infinitude of primes in LaTeX and explain each step in plain language.
Grok-4.3
\textbf{Theorem.} There are infinitely many primes.

\textbf{Proof.} Assume $p_1, \dots, p_n$ are all the primes. Consider $N = p_1 p_2 \cdots p_n + 1$. Division of $N$ by any $p_i$ leaves remainder $1$, so no $p_i$ divides $N$. Hence $N$ has a prime factor outside the list, a contradiction. $\blacksquare$

Takeaway: any finite list of primes can be extended, hence the set of primes is infinite.
Where teams use it

Four scenarios where it pays for itself

1. Long-document analysis: contracts and papers up to 1M tokens in one shot
2. Production agents: multi-step pipelines with minimal hallucinations
3. Math research: LaTeX outputs with step-by-step reasoning
4. Video analysis: inspect frames and scenes directly via the API
About the model

More about Grok-4.3

Grok 4.3 Online — xAI's New Flagship on NetRoom

Grok 4.3 is xAI's April 2026 release that replaced the Grok 4.20 beta. The headline changes: reasoning is now permanently on, the context window jumped to 1 million tokens, the API gained native video input, and pricing dropped roughly 40% per million tokens. Spin it up on NetRoom in your browser — no VPN, no foreign card.

What changed from Grok 4.20

Context grew from 256K to 1M tokens, so Grok 4.3 swallows whole books, codebases and multi-step agent chains in a single request. Reasoning is no longer a toggle — every response goes through an internal chain of thought, which sharply improves factual accuracy and cuts hallucinations. xAI also opened video input at the API level for the first time in the Grok lineup. Output speed sits around 110 tokens per second, above average for reasoning models in this price tier.
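Before sending a whole book or codebase in one request, it helps to estimate whether it fits the 1M-token window. The sketch below uses the common rough heuristic of about 4 characters per token; it is an estimate only, not NetRoom's or xAI's actual tokenizer.

```python
# Rough pre-flight check against the 1M-token context window.
# The ~4 characters-per-token ratio is a common heuristic; a real
# tokenizer will give different counts for real text.

CONTEXT_LIMIT = 1_000_000   # Grok 4.3 context window, in tokens
HIGHER_TIER = 200_000       # requests above this are billed at a higher tier

def estimate_tokens(text: str) -> int:
    """Crude token estimate: roughly 4 characters per token."""
    return max(1, len(text) // 4)

def fits_context(text: str) -> tuple[bool, bool]:
    """Return (fits in the window, crosses the higher billing tier)."""
    n = estimate_tokens(text)
    return n <= CONTEXT_LIMIT, n > HIGHER_TIER

doc = "x" * 3_200_000        # ~800K estimated tokens
fits, pricey = fits_context(doc)
```

A document that passes the check may still exceed the limit once system prompts and expected output are added, so leave headroom.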

Where it shines

On agentic benchmarks the model gained over 300 Elo on GDPval-AA versus Grok 4.20, scored 98% on τ²-Bench Telecom and 81% on IFBench. It earned a 53 on the Artificial Analysis Intelligence Index — top of its price tier. Best fit: long-document analysis, math-heavy research, production agents, codebase exploration and video-frame inspection via the API.
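For video-frame inspection via the API, a request body might be shaped as below. This is a sketch only: it assumes an OpenAI-style chat-completions message schema, and the `video_url` content type and the `grok-4.3` model id are assumptions, not documented fields, so check NetRoom's API reference before relying on them.

```python
# Hypothetical request body mixing text and video input.
# Assumption: an OpenAI-compatible message schema with a "video_url"
# content part; field names are illustrative, not documented.

def build_video_request(prompt: str, video_url: str,
                        model: str = "grok-4.3") -> dict:
    """Build a chat-completions-style body with one text and one video part."""
    return {
        "model": model,
        "messages": [
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": prompt},
                    {"type": "video_url", "video_url": {"url": video_url}},
                ],
            }
        ],
    }

body = build_video_request(
    "Describe what happens in this clip.",
    "https://example.com/clip.mp4",
)
```

The body would then be POSTed to the provider's chat-completions endpoint with your API key.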

How it compares to Claude and GPT

Versus Claude Sonnet and GPT-5, Grok 4.3 ships lighter content filters and is significantly cheaper on long-context jobs. On strict instruction following and multi-step reasoning it trades blows with the market leaders. On heavy coding tasks it still trails dedicated coding models that score above 50 on Coding Index.

Pricing and getting started

On NetRoom billing is in rubles, no foreign card needed. Requests above 200K tokens are billed at a higher tier — keep that in mind for very long contexts. Sign up, pick Grok 4.3 from the model list and send your first prompt. Available from the browser, mobile client and API — no VPN.
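The two-tier billing above can be sketched as follows. The page gives only the 200K-token threshold and a relative 40% price cut, so the per-million rates below are placeholder values, not NetRoom's published tariff.

```python
# Two-tier input-cost sketch: tokens up to the 200K threshold billed
# at a base rate, tokens above it at a higher rate. Prices are
# placeholders, not NetRoom's actual rates.

THRESHOLD = 200_000
BASE_PER_M = 2.0    # hypothetical price per 1M tokens below the threshold
HIGH_PER_M = 4.0    # hypothetical price per 1M tokens above the threshold

def input_cost(tokens: int) -> float:
    """Cost of an input of `tokens` tokens under the two-tier scheme."""
    base = min(tokens, THRESHOLD)
    extra = max(0, tokens - THRESHOLD)
    return base / 1e6 * BASE_PER_M + extra / 1e6 * HIGH_PER_M

small = input_cost(100_000)   # entirely in the base tier
big = input_cost(500_000)     # 200K at base rate + 300K at the higher rate
```

With these placeholder rates, the 500K-token request costs 0.4 for its first 200K tokens plus 1.2 for the remaining 300K; the point is that cost grows faster past the threshold, so chunking very long jobs can pay off.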

Try Grok-4.3 right now

Free access to basic models. No card, no obligations.