Forums - yolox dlc

yolox dlc
gsosun13
Join Date: 27 May 24
Posts: 3
Posted: Mon, 2024-07-15 00:52

I'm currently trying to use a DLC model converted from the YOLOX_s model, but the inference results using the DLC are abnormal. Could there be an issue in the snpe-onnx-to-dlc conversion process?

When I used the original ONNX model for inference, it completed normally. However, the results from the DLC model have class IDs fixed at 0 and 5, and the confidence values are strange, ranging from -0.0002 to 0.024 instead of falling between 0 and 1.
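
To narrow down whether the converter itself is the problem, I am planning to compare the raw ONNX Runtime output with the SNPE .raw output for the same preprocessed input. A minimal sketch of that comparison, assuming both files already exist (the file names here are placeholders), looks like this:

import numpy as np
import onnxruntime as ort

# Placeholder paths: the ONNX model, the preprocessed input fed to SNPE,
# and the raw output SNPE produced for that input
session = ort.InferenceSession("yolox_s.onnx")
input_name = session.get_inputs()[0].name

# Assumes input.raw holds the same (1, 3, 640, 640) float32 tensor given to SNPE;
# if SNPE was fed a different layout (e.g. NHWC), transpose before comparing
img = np.fromfile("input.raw", dtype=np.float32).reshape(1, 3, 640, 640)

onnx_out = session.run(None, {input_name: img})[0].reshape(-1)
dlc_out = np.fromfile("output.raw", dtype=np.float32)

print("onnx:", onnx_out.size, float(onnx_out.min()), float(onnx_out.max()))
print("dlc :", dlc_out.size, float(dlc_out.min()), float(dlc_out.max()))
if onnx_out.size == dlc_out.size:
    print("max abs diff:", float(np.abs(onnx_out - dlc_out).max()))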

Could this be due to SNPE version issues? I would appreciate your help.

sanjjey.a.sanjjey
Join Date: 17 May 22
Posts: 67
Posted: Tue, 2024-07-16 05:33

Hi,
This is unlikely to be an SNPE version issue.
Can you check your post-processing script? A quick check you could try before any NMS is sketched below.
May I know which model you are using?
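
For example, you could print the raw score ranges before doing any post-processing; if the objectness and class scores are already outside [0, 1] at that point, the issue is upstream of your script. This is only a rough sketch, assuming the output is an 8400 x 85 float32 buffer (the path is a placeholder):

import numpy as np

out = np.fromfile("output.raw", dtype=np.float32)  # placeholder: one SNPE raw output
print("elements:", out.size)                       # expect 8400 * 85 = 714000 for YOLOX-s at 640x640
pred = out.reshape(8400, 85)
print("objectness range:", pred[:, 4].min(), pred[:, 4].max())
print("class score range:", pred[:, 5:].min(), pred[:, 5:].max())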

gsosun13
Join Date: 27 May 24
Posts: 3
Posted: Tue, 2024-07-16 17:14

Hi,

I wrote the visualization code by referencing "https://github.com/Megvii-BaseDetection/YOLOX/blob/main/demo/ONNXRuntime...", since the output format of the ONNX model and the converted DLC is the same. The code is as follows:
 
import argparse
import os
import cv2
import numpy as np
import glob
from yolox.data.data_augment import preproc as preprocess
from yolox.data.datasets import COCO_CLASSES
from yolox.utils import mkdir, multiclass_nms, demo_postprocess, vis
 
def load_snpe_output(output_path):
    # SNPE writes each raw output as a flat float32 buffer
    return np.fromfile(output_path, dtype=np.float32)

def process_single_output(snpe_output_path, image_path, output_path, input_shape=(640, 640)):
    image = cv2.imread(image_path)
    # Same letterbox preprocessing as the YOLOX ONNXRuntime demo; ratio is kept
    # to map boxes back to the original image size
    img, ratio = preprocess(image, input_shape)

    # Assumes the DLC output is laid out as (1, 8400, 85):
    # 4 box coords + objectness + 80 class scores per anchor point
    snpe_output = load_snpe_output(snpe_output_path)
    output = snpe_output.reshape((1, 8400, 85))

    # Decode grid/stride offsets exactly as in the reference demo, then drop the batch dim
    predictions = demo_postprocess(output, input_shape)[0]
    boxes = predictions[:, :4]
    # Per-class confidence = objectness * class probability
    scores = predictions[:, 4:5] * predictions[:, 5:]
 
    # Convert (cx, cy, w, h) to (x1, y1, x2, y2) corners
    boxes_xyxy = np.ones_like(boxes)
    boxes_xyxy[:, 0] = boxes[:, 0] - boxes[:, 2] / 2.
    boxes_xyxy[:, 1] = boxes[:, 1] - boxes[:, 3] / 2.
    boxes_xyxy[:, 2] = boxes[:, 0] + boxes[:, 2] / 2.
    boxes_xyxy[:, 3] = boxes[:, 1] + boxes[:, 3] / 2.
    boxes_xyxy /= ratio
 
    # Class-aware NMS; thresholds follow the reference demo
    dets = multiclass_nms(boxes_xyxy, scores, nms_thr=0.45, score_thr=0.1)
    if dets is not None:
        final_boxes, final_scores, final_cls_inds = dets[:, :4], dets[:, 4], dets[:, 5]
        image = vis(image, final_boxes, final_scores, final_cls_inds,
                    conf=0.5, class_names=COCO_CLASSES)
        cv2.imwrite(output_path, image)
 
def main(args):
    input_shape = (640, 640)
    image_path = args.image
    snpe_raw_pattern = args.snpe_raw
    output_dir = args.output_dir
 
    mkdir(output_dir)
    # Each matching .raw file is treated as one SNPE output for the same input image
    snpe_raw_files = glob.glob(snpe_raw_pattern)
 
    for i, snpe_raw in enumerate(snpe_raw_files):
        print(f"Processing file {i+1}/{len(snpe_raw_files)}: {snpe_raw}")
        output_path = os.path.join(output_dir, f'Result_{i}.jpg')
        process_single_output(snpe_raw, image_path, output_path, input_shape)
 
if __name__ == '__main__':
    parser = argparse.ArgumentParser(description="YOLOX inference with SNPE output")
    parser.add_argument('--image', type=str, required=True, help='Path to input image')
    parser.add_argument('--snpe_raw', type=str, required=True, help='Pattern for SNPE raw output files')
    parser.add_argument('--output_dir', type=str, required=True, help='Directory to save output images')
    args = parser.parse_args()
 
    main(args)
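
One assumption worth double-checking in the script above is the reshape to (1, 8400, 85); if the converted DLC writes its output in a different memory order, the class columns get misread even though the numbers themselves are fine. A quick, hedged way to compare the two interpretations (the path is a placeholder):

import numpy as np

out = np.fromfile("output.raw", dtype=np.float32)  # placeholder: one SNPE raw output
assert out.size == 8400 * 85, f"unexpected element count: {out.size}"

# Interpretation used in the script above
a = out.reshape(8400, 85)
# Alternative interpretation, in case the buffer is stored transposed
b = out.reshape(85, 8400).T

for name, arr in (("(8400, 85)", a), ("(85, 8400).T", b)):
    print(name, "objectness range:", arr[:, 4].min(), arr[:, 4].max())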