# nguyenvulebinh/wav2vec2-base-vi-vlsp2020

Vietnamese automatic speech recognition model (wav2vec2, PyTorch, Transformers). License: CC BY-NC 4.0.

Load model & processor:

```python
from importlib.machinery import SourceFileLoader
from transformers.file_utils import cached_path, hf_bucket_url
from transformers import Wav2Vec2ProcessorWithLM

model_name = "nguyenvulebinh/wav2vec2-base-vi-vlsp2020"
# The model class is defined in model_handling.py, which ships with the checkpoint
model = SourceFileLoader("model", cached_path(hf_bucket_url(model_name, filename="model_handling.py"))).load_module().Wav2Vec2ForCTC.from_pretrained(model_name)
processor = Wav2Vec2ProcessorWithLM.from_pretrained(model_name)
```

Load an example audio clip (16 kHz):

```python
import torchaudio

# Example clip bundled with the model repo; input audio must be 16 kHz mono
audio, sample_rate = torchaudio.load(cached_path(hf_bucket_url(model_name, filename="t2_0000006682.wav")))
input_data = processor.feature_extractor(audio[0], sampling_rate=16000, return_tensors='pt')
```
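The feature extractor above expects 16 kHz input. If your audio is sampled at another rate, resample it first (in practice, use `torchaudio.transforms.Resample`); the sketch below is a naive linear-interpolation resampler for illustration only, with all names hypothetical:

```python
import numpy as np

def resample_linear(wave, orig_sr, target_sr=16000):
    """Naive linear-interpolation resampler (illustration only;
    prefer torchaudio.transforms.Resample for real audio)."""
    n_out = int(round(len(wave) * target_sr / orig_sr))
    x_old = np.linspace(0, 1, num=len(wave), endpoint=False)
    x_new = np.linspace(0, 1, num=n_out, endpoint=False)
    return np.interp(x_new, x_old, wave)

wave_44k = np.zeros(44100)                       # one second of silence at 44.1 kHz
wave_16k = resample_linear(wave_44k, 44100)
print(len(wave_16k))                             # → 16000
```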

Run inference:

```python
# Forward pass; output.logits has shape (batch, time, vocab)
output = model(**input_data)
```

Output transcript without LM (greedy CTC decoding):

```python
print(processor.tokenizer.decode(output.logits.argmax(dim=-1)[0].detach().cpu().numpy()))
```
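The argmax decode above is greedy CTC decoding: the CTC-aware tokenizer collapses repeated frame predictions and strips blank tokens. A minimal sketch of that collapse rule, with hypothetical integer ids and blank id 0 assumed:

```python
def ctc_greedy_collapse(ids, blank_id=0):
    # Standard CTC rule: merge consecutive duplicates, then drop blanks.
    # A blank between two identical ids keeps them as separate tokens.
    out = []
    prev = None
    for i in ids:
        if i != prev and i != blank_id:
            out.append(i)
        prev = i
    return out

print(ctc_greedy_collapse([0, 3, 3, 0, 3, 5, 5, 0]))  # → [3, 3, 5]
```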

Output transcript with LM (beam search rescored by the bundled language model):

```python
print(processor.decode(output.logits.cpu().detach().numpy()[0], beam_width=100).text)
```


### Model Parameters License

The ASR model parameters are made available for non-commercial use only, under the terms of the Creative Commons Attribution-NonCommercial 4.0 International (CC BY-NC 4.0) license. You can find details at: https://creativecommons.org/licenses/by-nc/4.0/legalcode


### Contact 

nguyenvulebinh@gmail.com

[![Follow](https://img.shields.io/twitter/follow/nguyenvulebinh?style=social)](https://twitter.com/intent/follow?screen_name=nguyenvulebinh)