NVIDIA Nemotron Parse v1.1 is designed to understand document semantics and extract text and table elements with spatial grounding. Given an image, NVIDIA Nemotron Parse v1.1 produces structured annotations, including formatted text, bounding boxes, and the corresponding semantic classes, ordered according to the document's reading flow. It overcomes the shortcomings of traditional OCR technologies that struggle with complex, structurally variable document layouts, and helps transform unstructured documents into actionable, machine-usable representations. This has several downstream benefits, such as increasing the availability of training data for Large Language Models (LLMs), improving the accuracy of extractor, curator, retriever, and agentic AI applications, and enhancing document understanding pipelines.
This model is ready for commercial use.
[Note]: We recently released an updated version, Nemotron-Parse-v1.2.
GOVERNING TERMS: The NIM container is governed by the NVIDIA Software License Agreement and Product-Specific Terms for NVIDIA AI Products. Use of this model is governed by the NVIDIA Open Model License Agreement. Use of the tokenizer included in this model is governed by the CC-BY-4.0 license.
Deployment Geography: Global
NVIDIA Nemotron Parse v1.1 is capable of comprehensive text and document-structure understanding. It can be used in retriever and curator solutions. Its text extraction datasets and capabilities help with LLM and VLM training, as well as improve the run-time inference accuracy of VLMs. The model performs text extraction from PDF and PPT documents. It can classify the objects in a given document (title, section, caption, index, footnote, list, table, bibliography, image) and provide bounding boxes with coordinates.
Release Date: November 17, 2025
Architecture Type: Transformer-based vision-encoder-decoder model
Cumulative Compute: 2.2e+22
Estimated Energy and Emissions for Model Training:
Energy Consumption: 7,827.46 kWh
Carbon Emissions: 3.21 tCO2e
Runtime Engine(s): TensorRT-LLM
Supported Hardware Microarchitecture Compatibility: NVIDIA Hopper / NVIDIA Ampere / NVIDIA Turing
Supported Operating System(s): Linux
The integration of foundation and fine-tuned models into AI systems requires additional testing using use-case-specific data to ensure safe and effective deployment. Following the V-model methodology, iterative testing and validation at both unit and system levels are essential to mitigate risks, meet technical and functional requirements, and ensure compliance with safety and ethical standards before deployment.
Version: V1.1. A faster version of Nemotron-Parse is also available: Nemotron-Parse-v1.1-TC.
pip install -r requirements.txt
Alternatively, you can use the public image nvcr.io/nvidia/pytorch:25.03-py3 with the following library versions installed on top:
pip install accelerate==1.12.0
pip install albumentations==2.0.8
pip install transformers==4.51.3
pip install timm==1.0.22
import torch
from PIL import Image, ImageDraw
from transformers import AutoModel, AutoProcessor, AutoTokenizer, GenerationConfig

# Load model and processor
model_path = "nvidia/NVIDIA-Nemotron-Parse-v1.1"  # Or use a local path
device = "cuda:0"

model = AutoModel.from_pretrained(
    model_path,
    trust_remote_code=True,
    torch_dtype=torch.bfloat16,
).to(device).eval()
tokenizer = AutoTokenizer.from_pretrained(model_path)
processor = AutoProcessor.from_pretrained(model_path, trust_remote_code=True)

# Load image and build the task prompt
image = Image.open("path/to/your/image.jpg")
task_prompt = "</s><s><predict_bbox><predict_classes><output_markdown>"

# Preprocess the image and prompt
inputs = processor(images=[image], text=task_prompt, return_tensors="pt", add_special_tokens=False).to(device)
generation_config = GenerationConfig.from_pretrained(model_path, trust_remote_code=True)

# Generate the structured annotation
outputs = model.generate(**inputs, generation_config=generation_config)

# Decode the generated text
generated_text = processor.batch_decode(outputs, skip_special_tokens=True)[0]
from PIL import Image, ImageDraw
from postprocessing import extract_classes_bboxes, transform_bbox_to_original, postprocess_text

# Split the raw prediction into classes, bounding boxes, and text snippets
classes, bboxes, texts = extract_classes_bboxes(generated_text)
# Map predicted boxes back to the original image coordinate space
bboxes = [transform_bbox_to_original(bbox, image.width, image.height) for bbox in bboxes]

# Specify output formats for postprocessing
table_format = 'latex'  # latex | HTML | markdown
text_format = 'markdown'  # markdown | plain
blank_text_in_figures = False  # remove text inside 'Picture' class
texts = [postprocess_text(text, cls=cls, table_format=table_format, text_format=text_format, blank_text_in_figures=blank_text_in_figures) for text, cls in zip(texts, classes)]

for cl, bb, txt in zip(classes, bboxes, texts):
    print(cl, ': ', txt)

# Draw the predicted bounding boxes on the page image
draw = ImageDraw.Draw(image)
for bbox in bboxes:
    draw.rectangle((bbox[0], bbox[1], bbox[2], bbox[3]), outline="red")
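For a quick visual check, you can then save the annotated page image (standard PIL usage; the filename below is just an example):

image.save("annotated_page.png")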
Update: Nemotron-Parse-v1.1 is now supported in vLLM main and is included in the vllm/vllm-openai:v0.14.1 Docker image.
Note: when running on A100/A10 GPUs, we recommend launching vllm serve with --attention-backend=TRITON_ATTN.
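For example, a minimal launch with that flag (the full set of recommended serve options is shown further below):

vllm serve nvidia/NVIDIA-Nemotron-Parse-v1.1 --attention-backend=TRITON_ATTN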
You will need to install a few extra dependencies on top, and then follow the vLLM inference example below:
pip install albumentations timm open_clip_torch
from vllm import LLM, SamplingParams
from PIL import Image


def main():
    # Greedy decoding with a mild repetition penalty
    sampling_params = SamplingParams(
        temperature=0,
        top_k=1,
        repetition_penalty=1.1,
        max_tokens=9000,
        skip_special_tokens=False,
    )
    llm = LLM(
        model="nvidia/NVIDIA-Nemotron-Parse-v1.1",
        max_num_seqs=64,
        limit_mm_per_prompt={"image": 1},
        dtype="bfloat16",
        trust_remote_code=True,
    )
    image = Image.open("<YOUR-IMAGE-PATH>")
    prompts = [
        {  # Implicit prompt
            "prompt": "</s><s><predict_bbox><predict_classes><output_markdown>",
            "multi_modal_data": {
                "image": image
            },
        },
        {  # Explicit encoder/decoder prompt
            "encoder_prompt": {
                "prompt": "",
                "multi_modal_data": {
                    "image": image
                },
            },
            "decoder_prompt": "</s><s><predict_bbox><predict_classes><output_markdown>",
        },
    ]
    outputs = llm.generate(prompts, sampling_params)
    for output in outputs:
        prompt = output.prompt
        generated_text = output.outputs[0].text
        print(f"Decoder prompt: {prompt!r}, Generated text: {generated_text!r}")


if __name__ == "__main__":
    main()
Alternatively, you can start a vLLM server as:
vllm serve nvidia/NVIDIA-Nemotron-Parse-v1.1 \
--dtype bfloat16 \
--max-num-seqs 8 \
--limit-mm-per-prompt '{"image": 1}' \
--trust-remote-code \
--port 8000 \
--chat-template chat_template.jinja
with the chat_template.jinja provided in this repository. Then you can run inference as:
import base64

from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",
    api_key="EMPTY",  # the local vLLM server does not require a real key
)

# Read and base64-encode the image
with open("<your-image-path>", "rb") as f:
    img_b64 = base64.b64encode(f.read()).decode("utf-8")

prompt_text = "</s><s><predict_bbox><predict_classes><output_markdown>"

resp = client.chat.completions.create(
    model="nvidia/NVIDIA-Nemotron-Parse-v1.1",
    messages=[
        {
            "role": "user",
            "content": [
                {
                    "type": "text",
                    "text": prompt_text,
                },
                {
                    "type": "image_url",
                    "image_url": {
                        "url": f"data:image/png;base64,{img_b64}",
                    },
                },
            ],
        }
    ],
    max_tokens=9000,
    temperature=0.0,
    extra_body={
        "repetition_penalty": 1.1,
        "top_k": 1,
        "skip_special_tokens": False,
    },
)

print(resp.choices[0].message.content)
Note: we recommend using the default prompt, which extracts bounding boxes, classes, and text in markdown formatting, for all use cases: </s><s><predict_bbox><predict_classes><output_markdown>. If needed, you can instead use the prompt that omits text extraction and outputs only bounding boxes and classes: </s><s><predict_bbox><predict_classes><output_no_text>.
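With the OpenAI client example above, switching to layout-only detection is just a change of the prompt string; everything else in the request stays the same:

# Layout-only: bounding boxes and classes, no text extraction
prompt_text = "</s><s><predict_bbox><predict_classes><output_no_text>"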
Nemotron-Parse-v1.1 is also available as an optimized NIM container.
[Deprecated] An alternative way to run vLLM is with our fork (based on vLLM v0) according to the installation instructions below, and then following the vLLM inference examples above:
uv venv --python 3.12 --seed
source .venv/bin/activate
uv pip install "git+https://github.com/amalad/vllm.git@nemotron_parse"
uv pip install timm albumentations
Please refer to the postprocessing example above to convert vLLM predictions to the desired format, and convert the predicted bounding boxes back to the image coordinate space.
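A minimal sketch of that conversion, assuming generated_text holds the raw output string from vLLM and that the repository's postprocessing helpers behave as in the Transformers example above:

from PIL import Image
from postprocessing import extract_classes_bboxes, transform_bbox_to_original, postprocess_text

image = Image.open("<YOUR-IMAGE-PATH>")  # same page that was sent to vLLM

# Split the raw prediction into classes, bounding boxes, and text snippets
classes, bboxes, texts = extract_classes_bboxes(generated_text)
# Map predicted boxes back to the original image coordinate space
bboxes = [transform_bbox_to_original(bbox, image.width, image.height) for bbox in bboxes]
# Convert each text snippet to the desired output format
texts = [postprocess_text(text, cls=cls, table_format='latex', text_format='markdown')
         for text, cls in zip(texts, classes)]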
Nemotron-Parse-v1.1 extracts text elements and their bounding boxes, along with an associated semantic class for each element.
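After postprocessing, each detected element can be viewed as a (class, bounding box, text) triple. A purely hypothetical record, in the spirit of the Python examples above (class name and coordinates are illustrative, not real model output):

# Hypothetical example of one extracted element after postprocessing
element = {
    "class": "Title",             # semantic class (illustrative)
    "bbox": [74, 112, 538, 139],  # x0, y0, x1, y1 in image pixels (illustrative)
    "text": "# A Sample Document Title",
}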

Nemotron-Parse-v1.1 extracts complex tables in LaTeX format, including for multirow and multicolumn formatting.
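As an illustration of the target LaTeX for such tables, here is a hand-written tabular using \multirow and \multicolumn (illustrative only, not actual model output):

\begin{tabular}{|l|c|c|}
\hline
\multirow{2}{*}{Item} & \multicolumn{2}{c|}{Quantity} \\
\cline{2-3}
 & 2024 & 2025 \\
\hline
A & 10 & 12 \\
B & 7 & 9 \\
\hline
\end{tabular}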

Extraction of text styles and mathematical equations is supported via a combination of markdown and LaTeX formatting.
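For instance, a styled sentence with an inline equation would be represented roughly as follows (illustrative only, not actual model output):

The **expected loss** is $L = -\sum_i y_i \log \hat{y}_i$, averaged over the batch.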

NVIDIA Nemotron Parse 1.1 is first pre-trained on our internal datasets, combining human-labeled, synthetic, and automatically labeled data.
Data Modality:
* Text
* Image
Data Collection Method by Dataset: Hybrid: Human, Synthetic, Automated
Labeling Method by Dataset: Hybrid: Human, Synthetic, Automated
NVIDIA Nemotron Parse 1.1 is evaluated on multiple datasets for robustness, including public and internal datasets.
Data Collection Method by Dataset: Hybrid: Human, Synthetic, Automated
Labeling Method by Dataset: Hybrid: Human, Synthetic, Automated
Runtime Engine(s): TensorRT-LLM
Test Hardware: NVIDIA H100
NVIDIA believes Trustworthy AI is a shared responsibility and we have established policies and practices to enable development for a wide array of AI applications. When downloaded or used in accordance with our terms of service, developers should work with their supporting model team to ensure this model meets requirements for the relevant industry and use case and addresses unforeseen product misuse.
Please report security vulnerabilities or NVIDIA AI Concerns here.
You are responsible for ensuring that your use of NVIDIA AI Models complies with all applicable laws.
Get access to knowledge base articles and support cases or submit a ticket.
@misc{chumachenko2025nvidianemotronparse11,
    title={NVIDIA Nemotron Parse 1.1},
    author={NVIDIA},
    year={2025},
    eprint={2511.20478},
    archivePrefix={arXiv},
    primaryClass={cs.LG},
    url={https://arxiv.org/abs/2511.20478},
}