简体中文 | English
If your goal is deployment on TensorRT, OpenVINO, or ONNX Runtime, PaddlePaddle now provides [FastDeploy](https://github.com/PaddlePaddle/FastDeploy), which supports deploying 150+ models directly to these engines. FastDeploy invokes the Paddle2ONNX conversion internally, so users no longer need to call it explicitly, avoiding the various tricks and alignment problems of the conversion process.
Paddle2ONNX converts models from the PaddlePaddle format to the ONNX format. Through ONNX, Paddle models can be deployed to a variety of inference engines, including TensorRT, OpenVINO, MNN, TNN, NCNN, and any other inference engine or hardware that supports the open ONNX format.
Thanks to the EasyEdge Team for contributing Paddle2Caffe, which supports exporting Paddle models to the Caffe format. For installation and usage, please refer to Paddle2Caffe.
Paddle2ONNX maintains a model zoo of popular Paddle models, including PicoDet, OCR, HumanSeg, and models from other domains. Developers can download and use them directly; see the model_zoo directory for details!
```bash
pip install paddle2onnx
```
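To verify the installation, you can print the package version from Python. A minimal sketch; it assumes that your installed paddle2onnx release exposes a `__version__` attribute, which recent releases do:

```python
import paddle2onnx

# Print the installed paddle2onnx version to confirm the install succeeded
print(paddle2onnx.__version__)
```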
When exporting a model, Paddle2ONNX requires the deployment model format, which consists of two files:

- `model_name.pdmodel`: stores the model structure
- `model_name.pdiparams`: stores the model parameters

[Note] The suffix of the parameter file is `.pdiparams`. If your parameter file's suffix is `.pdparams`, the parameters were saved during training and the model is not yet in the deployment format. To export a deployment model, refer to the export model documentation of each model suite, or see the sketch after the command below.

```bash
paddle2onnx --model_dir saved_inference_model \
            --model_filename model.pdmodel \
            --params_filename model.pdiparams \
            --save_file model.onnx \
            --enable_dev_version True
```
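For reference, a deployment model can be exported from dynamic-graph code with `paddle.jit.save`. A minimal sketch; the LeNet model and the input shape here are illustrative assumptions, and each model suite documents its own export script:

```python
import paddle
from paddle.static import InputSpec
from paddle.vision.models import LeNet  # illustrative model; swap in your own

layer = LeNet()  # assume the weights have already been trained or loaded

# Saving with an InputSpec produces saved_inference_model/model.pdmodel and
# saved_inference_model/model.pdiparams, the two files Paddle2ONNX expects.
paddle.jit.save(
    layer,
    path="saved_inference_model/model",
    input_spec=[InputSpec(shape=[None, 1, 28, 28], dtype="float32")],
)
```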
| Parameter | Description |
|---|---|
| --model_dir | Directory containing the Paddle model |
| --model_filename | [Optional] Filename of the network structure file under `--model_dir` |
| --params_filename | [Optional] Filename of the model parameter file under `--model_dir` |
| --save_file | Save path for the converted ONNX model |
| --opset_version | [Optional] ONNX OpSet version of the exported model; versions 7~16 are supported; default is 9 |
| --enable_dev_version | [Optional] Whether to use the new version of Paddle2ONNX (recommended); default is True |
| --enable_onnx_checker | [Optional] Whether to check the correctness of the exported ONNX model; enabling this switch is recommended; default is False |
| --enable_auto_update_opset | [Optional] Whether to enable automatic opset upgrading: when conversion fails at a lower opset version, a higher version is selected automatically; default is True |
| --deploy_backend | [Optional] Inference engine for deploying the quantized model; supports onnxruntime, tensorrt, or other; when other is selected, all quantization information is stored in the max_range.txt file; default is onnxruntime |
| --save_calibration_file | [Optional] Save path for the calibration cache file that a TensorRT 8.X deployment needs to read for the quantized model; default is calibration.cache |
| --version | [Optional] Print the paddle2onnx version |
| --external_filename | [Optional] When the exported ONNX model is larger than 2 GB, the storage path for external data must be set; the recommended value is external_data |
| --export_fp16_model | [Optional] Whether to export the ONNX model in FP16 format, which accelerates inference with ONNXRuntime-GPU; default is False |
| --custom_ops | [Optional] Export Paddle OPs as ONNX custom OPs, for example: --custom_ops '{"paddle_op":"onnx_op"}'; default is {} |
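After conversion, a quick sanity check is to load the exported model with ONNX Runtime and run a dummy input through it. A minimal sketch, assuming onnxruntime is installed; the NCHW float32 input shape is an assumption that depends on your model:

```python
import numpy as np
import onnxruntime as ort

sess = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])

# Inspect the model's actual input name and shape instead of hard-coding them
input_meta = sess.get_inputs()[0]
print(input_meta.name, input_meta.shape)

# Assumed 1x3x224x224 float32 input; replace with your model's real shape
dummy = np.random.rand(1, 3, 224, 224).astype("float32")
outputs = sess.run(None, {input_meta.name: dummy})
print([o.shape for o in outputs])
```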
If you need to optimize the exported ONNX model, we recommend onnx-simplifier; alternatively, you can optimize the model with the following command:

```bash
python -m paddle2onnx.optimize --input_model model.onnx --output_model new_model.onnx
```

If you need to fix the input shape of the exported model, for example to a static shape:

```bash
python -m paddle2onnx.optimize --input_model model.onnx \
                               --output_model new_model.onnx \
                               --input_shape_dict "{'x':[1,3,224,224]}"
```
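onnx-simplifier can also be driven from Python. A minimal sketch, assuming the onnxsim package is installed (`pip install onnxsim`):

```python
import onnx
from onnxsim import simplify

model = onnx.load("model.onnx")

# simplify returns the optimized model plus a flag indicating whether
# the simplified model still matches the original numerically
model_simp, check = simplify(model)
assert check, "Simplified ONNX model could not be validated"

onnx.save(model_simp, "new_model.onnx")
```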
If you need to prune the Paddle model, fix (make static) or modify its input shape, or merge Paddle model weight files, please use the following tools: Paddle-related tools
If you need to prune or modify the ONNX model, please refer to the following tools: ONNX-related tools
For exporting PaddleSlim quantized models, please refer to: Quantized Model Export to ONNX
VisualDL provides an online model conversion service. You can click the [Service Link](https://www.paddlepaddle.org.cn/paddle/visualdl/modelconverter/) to perform Paddle2ONNX model conversion online.