
Paddle2ONNX

简体中文 | English

🆕 New open source project FastDeploy

If you are converting models in order to deploy them with TensorRT, OpenVINO, or ONNX Runtime, PaddlePaddle now provides [FastDeploy](https://github.com/PaddlePaddle/FastDeploy), which supports deploying 150+ models directly to these engines. With FastDeploy, users no longer need to call the Paddle2ONNX conversion explicitly, and the various tricks and alignment problems of the conversion process are handled for you.

Introduction

Paddle2ONNX converts models from the PaddlePaddle format to the ONNX format. Through ONNX, Paddle models can be deployed to a variety of inference engines, including TensorRT, OpenVINO, MNN, TNN, NCNN, and any other inference engine or hardware that supports the open ONNX format.

Thanks to the EasyEdge team for contributing Paddle2Caffe, which supports exporting Paddle models to the Caffe format. For installation and usage, please refer to Paddle2Caffe.

Model Zoo

Paddle2ONNX provides a model zoo of popular Paddle models, covering PicoDet, OCR, HumanSeg, and models from other domains. Developers can download and use them directly; see the model_zoo directory for details.

Environment dependencies

  • None

Install

pip install paddle2onnx
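
After installation, you can quickly verify it from Python. This is a minimal check; it assumes the package exposes a __version__ attribute (the --version flag of the command-line tool, documented below, serves the same purpose):

import paddle2onnx  # assumes __version__ is exposed by the package
print(paddle2onnx.__version__)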

Usage

Get the PaddlePaddle deployment model

When Paddle2ONNX exports a model, the model must be in deployment format, which consists of two files:

  • model_name.pdmodel: the model structure
  • model_name.pdiparams: the model parameters

[Note] The parameter file of a deployment model has the suffix .pdiparams. If your parameter file has the suffix .pdparams, it contains parameters saved during training and is not in deployment format. For how to export a deployment model, refer to the export-model documentation of each model suite; a minimal sketch also follows below.
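
If you just want to try the conversion flow end to end, the following is a minimal sketch of producing such a deployment model with the Paddle 2.x dynamic-graph API. The toy network and file names are illustrative; real deployment models come from the model suites:

import paddle
from paddle.static import InputSpec

class SimpleNet(paddle.nn.Layer):
    def __init__(self):
        super().__init__()
        self.fc = paddle.nn.Linear(784, 10)

    def forward(self, x):
        return self.fc(x)

net = SimpleNet()
net.eval()
# paddle.jit.save writes saved_inference_model/model.pdmodel
# and saved_inference_model/model.pdiparams
paddle.jit.save(
    net,
    path="saved_inference_model/model",
    input_spec=[InputSpec(shape=[None, 784], dtype="float32", name="x")])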

Command line conversion

paddle2onnx --model_dir saved_inference_model \
             --model_filename model.pdmodel \
             --params_filename model.pdiparams \
             --save_file model.onnx \
             --enable_dev_version True

Parameter options

| Parameter | Description |
| --------- | ----------- |
| --model_dir | Directory containing the Paddle model |
| --model_filename | [Optional] Filename of the network-structure file under --model_dir |
| --params_filename | [Optional] Filename of the model-parameters file under --model_dir |
| --save_file | Save path for the converted ONNX model |
| --opset_version | [Optional] ONNX OpSet version of the conversion; versions 7~16 are currently supported; default is 9 |
| --enable_dev_version | [Optional] Whether to use the new version of Paddle2ONNX (recommended); default is True |
| --enable_onnx_checker | [Optional] Whether to check the correctness of the exported ONNX model; turning this on is recommended; default is False |
| --enable_auto_update_opset | [Optional] Whether to enable automatic opset upgrading: when conversion fails at a lower opset version, a higher version is selected automatically; default is True |
| --deploy_backend | [Optional] Target inference engine for deploying a quantized model; supports onnxruntime, tensorrt, or other; when other is selected, all quantization information is stored in the file max_range.txt; default is onnxruntime |
| --save_calibration_file | [Optional] Save path for the calibration cache file that a TensorRT 8.x deployment of the quantized model needs to read; default is calibration.cache |
| --version | [Optional] Print the paddle2onnx version |
| --external_filename | [Optional] When the exported ONNX model is larger than 2 GB, the storage path for external data must be set; recommended value: external_data |
| --export_fp16_model | [Optional] Whether to convert the exported ONNX model to FP16 format for accelerated inference with ONNXRuntime-GPU; default is False |
| --custom_ops | [Optional] Export Paddle ops as ONNX custom ops, e.g. --custom_ops '{"paddle_op":"onnx_op"}'; default is {} |
  • When using ONNXRuntime to validate converted models, please make sure to install the latest version (minimum requirement: 1.10.0); a validation sketch follows.
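
As a reference, the following is a minimal validation sketch with ONNXRuntime; the input name and shape are hypothetical and should be taken from your own model:

import numpy as np
import onnxruntime as ort

# Load the converted model on CPU and run it on random data
sess = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])
input_name = sess.get_inputs()[0].name  # first input of the graph
dummy = np.random.rand(1, 3, 224, 224).astype("float32")  # hypothetical shape
outputs = sess.run(None, {input_name: dummy})
print([o.shape for o in outputs])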

Other optimization tools

  1. If you need to optimize the exported ONNX model, onnx-simplifier is recommended (see the Python sketch after this list); you can also optimize the model with the following command:
python -m paddle2onnx.optimize --input_model model.onnx --output_model new_model.onnx
  2. If you need to modify the input shape of the exported ONNX model, for example to make it static, use the following command:
python -m paddle2onnx.optimize --input_model model.onnx \
                                --output_model new_model.onnx \
                                --input_shape_dict "{'x':[1,3,224,224]}"
  3. If you need to crop the Paddle model, fix or modify its input shape, or merge its weight files, please use the following tools: Paddle-related tools
  4. If you need to crop or modify the ONNX model, please refer to the following tools: ONNX related tools
  5. For exporting PaddleSlim quantized models, please refer to: Quantization Model Export ONNX
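
For item 1, onnx-simplifier can also be called from Python. A minimal sketch, assuming onnxsim is installed (pip install onnxsim) and model.onnx is in the working directory:

import onnx
from onnxsim import simplify

model = onnx.load("model.onnx")
# simplify returns the optimized model and a flag telling whether
# its outputs still match those of the original model
model_simplified, check = simplify(model)
assert check, "simplified model could not be validated"
onnx.save(model_simplified, "new_model.onnx")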

Paddle2ONNX with VisualDL service

VisualDL hosts the model conversion tool as an online service. Click [Service Link](https://www.paddlepaddle.org.cn/paddle/visualdl/modelconverter/) to perform Paddle2ONNX model conversion online.


Documents

License

Apache-2.0 license.

