Permutation-Invariant Neural Networks for Reinforcement Learning

David Ha (Otoro)

Reinforcement learning agents typically perform poorly when given inputs that differ from those seen during training. A new approach enables RL agents to perform well even when subject to corrupted, incomplete, or shuffled inputs. Note: This blog post about our paper is written by Yujin Tang and myself, and was originally posted on the Google AI Blog. It has been cross-posted here for archival purposes. Introduction: “The brain is able to use information coming from the skin as if it were coming f...

Read full article →

Related Articles

Accelerating Gemma 4: faster inference with multi-token prediction drafters
amrrs · Hacker News · 4d ago
OpenAI’s o1 correctly diagnosed 67% of ER patients vs. 50-55% by triage doctors
donsupreme · Hacker News · 6d ago
ProgramBench: Can language models rebuild programs from scratch?
jonbaer · Hacker News · 2d ago
ZAYA1-8B matches DeepSeek-R1 on math with less than 1B active parameters
steveharing1 · Hacker News · 2d ago
A couple million lines of Haskell: Production engineering at Mercury
unignorant · Hacker News · 6d ago