# kernels-community/flash-attn3

License: apache-2.0

Make sure `kernels` is installed: `pip install -U kernels`

```python
from kernels import get_kernel

# Fetch the kernel from the Hub (change the ID if needed).
kernel_module = get_kernel("kernels-community/flash-attn3")
flash_attn_combine = kernel_module.flash_attn_combine

flash_attn_combine(...)
```
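
For a more end-to-end sketch, the snippet below loads the kernel and runs `flash_attn_func` on random bfloat16 tensors. The tensor layout `(batch, seqlen, nheads, headdim)` and the `causal` keyword follow the upstream flash-attention convention; exact signatures and return values (some builds return an `(out, softmax_lse)` tuple) can vary between kernel revisions, so treat this as a starting point rather than a reference.

```python
import torch
from kernels import get_kernel

# Requires a CUDA GPU supported by this kernel (see "CUDA Capabilities" below).
flash_attn3 = get_kernel("kernels-community/flash-attn3")

batch, seqlen, nheads, headdim = 2, 1024, 8, 128
q = torch.randn(batch, seqlen, nheads, headdim, device="cuda", dtype=torch.bfloat16)
k = torch.randn(batch, seqlen, nheads, headdim, device="cuda", dtype=torch.bfloat16)
v = torch.randn(batch, seqlen, nheads, headdim, device="cuda", dtype=torch.bfloat16)

# Causal self-attention over the full sequence.
out = flash_attn3.flash_attn_func(q, k, v, causal=True)
if isinstance(out, tuple):  # some revisions also return the softmax log-sum-exp
    out = out[0]

print(out.shape)  # torch.Size([2, 1024, 8, 128])
```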


## Available functions

- `flash_attn_combine`
- `flash_attn_func`
- `flash_attn_qkvpacked_func`
- `flash_attn_varlen_func` (packed variable-length batches; see the sketch after this list)
- `flash_attn_with_kvcache`
- `get_scheduler_metadata`
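
`flash_attn_varlen_func` handles batches where sequences of different lengths are packed into a single token dimension and indexed by cumulative sequence lengths. The sketch below follows the upstream flash-attention argument names (`cu_seqlens_q`, `max_seqlen_q`, ...) as an assumption; this particular build may differ slightly, so check the module's docstrings before relying on it.

```python
import torch
from kernels import get_kernel

flash_attn3 = get_kernel("kernels-community/flash-attn3")

seqlens = [512, 300, 1024]          # three sequences of different lengths
nheads, headdim = 8, 128
total = sum(seqlens)

# All tokens packed along one dimension: (total_tokens, nheads, headdim).
q = torch.randn(total, nheads, headdim, device="cuda", dtype=torch.bfloat16)
k = torch.randn(total, nheads, headdim, device="cuda", dtype=torch.bfloat16)
v = torch.randn(total, nheads, headdim, device="cuda", dtype=torch.bfloat16)

# Cumulative sequence lengths mark sequence boundaries: [0, 512, 812, 1836].
cu = [0]
for s in seqlens:
    cu.append(cu[-1] + s)
cu_seqlens = torch.tensor(cu, device="cuda", dtype=torch.int32)

out = flash_attn3.flash_attn_varlen_func(
    q, k, v,
    cu_seqlens_q=cu_seqlens,
    cu_seqlens_k=cu_seqlens,
    max_seqlen_q=max(seqlens),
    max_seqlen_k=max(seqlens),
    causal=True,
)
```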

## Supported backends

- cuda

## CUDA Capabilities

- 8.0
- 9.0a
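
Compute capability 8.0 corresponds to Ampere data-center GPUs such as the A100, and 9.0a to Hopper GPUs such as the H100. A quick local check with plain PyTorch (not part of this kernel's API) is sketched below; it only verifies the compute capability, not that a matching build exists for your exact GPU.

```python
import torch

major, minor = torch.cuda.get_device_capability()
print(f"Detected compute capability: {major}.{minor}")

# This kernel targets 8.0 (Ampere, e.g. A100) and 9.0a (Hopper, e.g. H100).
if (major, minor) < (8, 0):
    raise RuntimeError("flash-attn3 requires a GPU with compute capability >= 8.0")
```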

## Benchmarks

A benchmarking script is available for this kernel. Run `kernels benchmark kernels-community/flash-attn3` to benchmark it on your own hardware.

[TODO: provide benchmarks if available]

## Source code

This kernel packages FlashAttention-3 from the upstream flash-attention repository by Tri Dao and collaborators: https://github.com/Dao-AILab/flash-attention. See also the paper "FlashAttention-3: Fast and Accurate Attention with Asynchrony and Low-precision" (Shah et al., 2024).

## Notes

[TODO: provide additional notes about this kernel if needed]