3 Comments
Subendhu Rongali

This is really impressive! Do you have any metrics on long context benchmarks such as RULER or NIAH? That seems to be the last advantage an attention mechanism would hold, compared to a state-space approach like this.

Eugene Cheah

In the RWKV v7 paper (https://arxiv.org/pdf/2503.14456), we covered 3B models that have been fully trained with long context and pass 32k NIAH tests, with evidence showing that usable context length scales with parameter size.

We forecast that a 70B model, given sufficient long-context data, should hold all the way to 512K context length without issues.

Note: the qwerky-v1 models are not long-context trained, but the upcoming qwerky-v2 is planned to be.

Howard

Nice work, really close to Qwen2.5 this time.
