GPT-J-6B: 6B JAX-Based Transformer

Aran Komatsuzaki

Summary: We have released GPT-J-6B, a 6B-parameter JAX-based (Mesh) Transformer LM (GitHub). GPT-J-6B performs nearly on par with the 6.7B-parameter GPT-3 (Curie) on a variety of zero-shot downstream tasks. You can try it out via this Colab notebook or the free web demo. The library also serves as an example of model parallelism with xmap on JAX. Below, we refer to GPT-J-6B as GPT-J for short. Why does this project matter? GPT-J is the best-performing publicly available Transformer LM in terms of zero-shot performance on various downstream tasks.
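
To give a flavor of the xmap-style model parallelism mentioned above, here is a minimal sketch (not GPT-J's actual code): a linear layer whose weight columns are split across a one-dimensional device mesh. The names `layer`, `sharded_layer`, and the mesh axis `'mp'` are illustrative, and the sketch assumes a JAX version where `jax.experimental.maps.xmap` and `Mesh` are still available (newer releases deprecate xmap in favor of shard_map and jit with shardings).

```python
import jax
import jax.numpy as jnp
import numpy as np
from jax.experimental.maps import xmap, Mesh  # deprecated in newer JAX

def layer(x, w_col):
    # Inside xmap, 'shards' is a named axis: w_col is a single column of the
    # full weight matrix, shape [d_in], so the dot product yields [batch].
    return jnp.dot(x, w_col)

# Map the logical 'shards' axis onto a 1-D device mesh axis named 'mp'.
sharded_layer = xmap(
    layer,
    in_axes=({}, {1: 'shards'}),   # x unmapped; shard w along dim 1
    out_axes={1: 'shards'},        # reassemble outputs along dim 1
    axis_resources={'shards': 'mp'},
)

devices = np.array(jax.devices())
with Mesh(devices, ('mp',)):
    x = jnp.ones((8, 16))          # [batch, d_in]
    w = jnp.ones((16, 32))         # [d_in, d_out]; columns split over 'mp'
    y = sharded_layer(x, w)        # [8, 32]
```

On a single device this behaves like vmap over the output columns; on, say, an 8-device TPU mesh the 32 columns are partitioned 4 per device, which is the flavor of tensor model parallelism the post refers to.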

Read full article →

Related Articles

Accelerating Gemma 4: faster inference with multi-token prediction drafters
amrrs · Hacker News · 4d ago
OpenAI’s o1 correctly diagnosed 67% of ER patients vs. 50-55% by triage doctors
donsupreme · Hacker News · 6d ago
ProgramBench: Can language models rebuild programs from scratch?
jonbaer · Hacker News · 2d ago
ZAYA1-8B matches DeepSeek-R1 on math with less than 1B active parameters
steveharing1 · Hacker News · 2d ago
A couple million lines of Haskell: Production engineering at Mercury
unignorant · Hacker News · 6d ago