
Intel Dev Board Review

Source: Intel IoT (英特爾物聯(lián)網) · 2025-01-24 09:37

Author: Sui Xiaojin (隋曉金)

I received Intel's dev board, the Nezha (哪吒), and I happen to have an OAK camera on hand; since both sides run the same OpenVINO stack anyway, let's review them together. Surprisingly the board ships with Windows by default, so it is more convenient to flash it to Linux.


Let's flash it to Linux first, which makes testing easier. Windows plus Python would also work for development, but I want to do something a bit harder: Ubuntu plus C++ inference, and I also want to try out ncnn. So, reluctantly, the stock system gets wiped. For the OS, the Intel-adapted Ubuntu 22.04 image is fine; just make sure the image matches the board's CPU model.
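Once the board has booted, a quick way to double-check which CPU it actually carries is something like:

lscpu | grep "Model name"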


Motrix downloads the image fairly quickly; then use Rufus to write it to the USB/SD card. Once the system is Linux you can set up SSH and work on the board remotely.
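If SSH is not already enabled in the image, a typical way to set it up on Ubuntu (stock package names, adjust if your image differs) is:

sudo apt update
sudo apt install -y openssh-server
sudo systemctl enable --now ssh
ip addr        # note the board's IP, then connect from the host with: ssh ubuntu@<board-ip>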

The system needs the official OpenVINO components installed so that model inference runs through OpenVINO on the Intel side. You could also use ncnn/MNN/ONNX, but the native components are a friendlier fit here.
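After installation, a quick sanity check that OpenVINO can see the board's devices (a one-liner assuming the Python package installed in the steps below is on the path):

python3 -c "from openvino.runtime import Core; print(Core().available_devices)"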


First set up the OAK environment for depth-camera inference and distance measurement, then run detection inference on the dev board to get a feel for its performance; conveniently, the chip inside the camera is also Intel silicon and uses the OpenVINO toolchain. The commands below install the camera's library environment on the dev board:

ubuntu@ubuntu:~$ wget https://gitee.com/oakchina/depthai-core/releases/download/v2.28.0/depthai_2.28.0_amd64.deb
ubuntu@ubuntu:~$ sudo apt install -f
ubuntu@ubuntu:~$ sudo dpkg -i depthai_2.28.0_amd64.deb
(Reading database ... 164136 files and directories currently installed.)
Preparing to unpack depthai_2.28.0_amd64.deb ...
Unpacking depthai (2.28.0) over (2.28.0) ...
Setting up depthai (2.28.0) ...

Next configure OpenVINO, following the official guide; this is mainly needed later for writing code and converting models. I'm writing the code in C++ to make things a bit harder.

https://www.intel.com/content/www/us/en/developer/tools/openvino-toolkit/download.html?PACKAGE=OPENVINO_BASE&VERSION=v_2022_3_2&ENVIRONMENT=DEV_TOOLS&OP_SYSTEM=LINUX&DISTRIBUTION=PIP

is the download link; the commands below are still executed on the dev board:

pip install openvino-dev==2022.3.2
storage.openvinotoolkit.org


ubuntu@ubuntu:~$ wget https://storage.openvinotoolkit.org/repositories/openvino/packages/2023.3/linux/l_openvino_toolkit_ubuntu22_2023.3.0.13775.ceeafaf64f3_x86_64.tgz
ubuntu@ubuntu:~$ tar -zxvf l_openvino_toolkit_ubuntu22_2023.3.0.13775.ceeafaf64f3_x86_64.tgz
ubuntu@ubuntu:~$ mv l_openvino_toolkit_ubuntu22_2023.3.0.13775.ceeafaf64f3_x86_64 openvino_2023
ubuntu@ubuntu:~$ mv openvino_2023/ /opt/intel/
ubuntu@ubuntu:~$ cd /opt/intel/
ubuntu@ubuntu:~$ cd openvino_2023/
ubuntu@ubuntu:/opt/intel/openvino_2023$ vim ~/.bashrc
source /opt/intel/openvino_2023/setupvars.sh
ubuntu@ubuntu:~$ cd /opt/intel/openvino_2023/install_dependencies/
ubuntu@ubuntu:/opt/intel/openvino_2023/install_dependencies$ sudo -E ./install_openvino_dependencies.sh

The commands below run on my own host machine. The reason is that the OpenVINO on the dev board cannot convert models into the camera's blob format, while the older OpenVINO release that can will not install on the board: 2021.4 only supports Ubuntu 20.04 and earlier, and the board's 22.04 is too new.

Let's start with YOLOv5-Lite, for which the official docs provide a method and an example; I summarize it here. I did this on my own Ubuntu 20.04 host, because the dev board's OpenVINO version is too new and I was worried a blob converted in that environment might not run on the OAK camera.

ubuntu@ubuntu:~/Downloads$ axel -n 100 https://registrationcenter-download.intel.com/akdlm/IRC_NAS/18096/l_openvino_toolkit_p_2021.4.689.tgz
ubuntu@ubuntu:~$ tar -zxvf l_openvino_toolkit_p_2021.4.689.tgz 
ubuntu@ubuntu:~/Downloads$ cd l_openvino_toolkit_p_2021.4.689/
ubuntu@ubuntu:~/Downloads/l_openvino_toolkit_p_2021.4.689$ sudo ./install_GUI.sh 
ubuntu@ubuntu:~$ cd /opt/intel/openvino_2021/install_dependencies/
ubuntu@ubuntu:/opt/intel/openvino_2021/install_dependencies$ sudo -E ./install_openvino_dependencies.sh 
ubuntu@ubuntu:/opt/intel/openvino_2021/bin$ sudo vim ~/.bashrc 

Append at the end:

source /opt/intel/openvino_2021/bin/setupvars.sh
ubuntu@ubuntu:/opt/intel/openvino_2021/bin$ source ~/.bashrc 
[setupvars.sh] OpenVINO environment initialized
ubuntu@ubuntu:/opt/intel/openvino_2021/bin$ cd /opt//intel/openvino_2021/deployment_tools/model_optimizer//install_prerequisites/
ubuntu@ubuntu:/opt/intel/openvino_2021/deployment_tools/model_optimizer/install_prerequisites$ sudo ./install_prerequisites.sh

Download the model and convert it:

ubuntu@ubuntu:~$ git clone https://github.com/ppogg/YOLOv5-Lite

For the model code, refer to the official OAK example:


Convert to ONNX and then to OpenVINO IR; see the official reference for export_onnx.py:

ubuntu@ubuntu:~/YOLOv5-Lite$ pip3 install -r requirements.txt
ubuntu@ubuntu:~/YOLOv5-Lite$ python3 export_onnx.py -w v5lite-e.pt -imgsz 640
Namespace(blob=False, convert_tool='blobconverter', img_size=[640, 640], 
input_model=PosixPath('/home/ubuntu/YOLOv5-Lite/v5lite-e.pt'), name='v5lite-e', 
opset=12, output_dir=PosixPath('/home/ubuntu/YOLOv5-Lite'), shaves=6, 
spatial_detection=False)
[18:12:38] INFO   YOLOv5  v1.5-16-g9d649a6 torch 2.4.1+cu121 CPU      
                                        
Fusing layers... 
[18:12:41] INFO   Model Summary: 167 layers, 781205 parameters, 0 gradients, 
          2.9 GFLOPS                         
 
      INFO   Starting ONNX export with onnx 1.16.1...          
      INFO   Starting to simplify ONNX...                
      INFO   ONNX export success, saved as:               
              /home/ubuntu/YOLOv5-Lite/v5lite-e.onnx       
      INFO   anchors:                          
              [10.0, 13.0, 16.0, 30.0, 33.0, 23.0, 30.0, 61.0,  
          62.0, 45.0, 59.0, 119.0, 116.0, 90.0, 156.0, 198.0, 373.0, 
          326.0]                           
      INFO   anchor_masks:                        
              {'side80': [0, 1, 2], 'side40': [3, 4, 5], 'side20':
          [6, 7, 8]}                         
      INFO   Anchors data export success, saved as:           
              /home/ubuntu/YOLOv5-Lite/v5lite-e.json       
      INFO   Export complete (3.61s).                  
ubuntu@ubuntu:~/YOLOv5-Lite$ python3 /opt/intel/openvino_2021/deployment_tools/model_optimizer/mo.py --input_model v5lite-e.onnx --output_dir /home/ubuntu/YOLOv5-Lite/saved/FP16 --input_shape [1,3,640,640] --data_type FP16 --scale_values [255.0,255.0,255.0] --mean_values [0,0,0]
Model Optimizer arguments:
Common parameters:
  - Path to the Input Model:   /home/ubuntu/YOLOv5-Lite/v5lite-e.onnx
  - Path for generated IR:   /home/ubuntu/YOLOv5-Lite/saved/FP16
  - IR output name:   v5lite-e
  - Log level:   ERROR
  - Batch:   Not specified, inherited from the model
  - Input layers:   Not specified, inherited from the model
  - Output layers:   Not specified, inherited from the model
  - Input shapes:   [1,3,640,640]
  - Mean values:   [0,0,0]
  - Scale values:   [255.0,255.0,255.0]
  - Scale factor:   Not specified
  - Precision of IR:   FP16
  - Enable fusing:   True
  - Enable grouped convolutions fusing:   True
  - Move mean values to preprocess section:   None
  - Reverse input channels:   False
ONNX specific parameters:
  - Inference Engine found in:   /opt/intel/openvino_2021/python/python3.8/openvino
Inference Engine version:   2021.4.1-3926-14e67d86634-releases/2021/4
Model Optimizer version:   2021.4.1-3926-14e67d86634-releases/2021/4
[ WARNING ] 
Detected not satisfied dependencies:
  networkx: installed: 3.1, required: ~= 2.5
  numpy: installed: 1.23.5, required: < 1.20
 
Please install required versions of components or use install_prerequisites script
/opt/intel/openvino_2021.4.689/deployment_tools/model_optimizer/install_prerequisites/install_prerequisites_onnx.sh
Note that install_prerequisites scripts may install additional components.
/opt/intel/openvino_2021/deployment_tools/model_optimizer/extensions/front/onnx/parameter_ext.py:20: DeprecationWarning: `mapping.TENSOR_TYPE_TO_NP_TYPE` is now deprecated and will be removed in a future release.To silence this warning, please use `helper.tensor_dtype_to_np_dtype` instead.
  'data_type': TENSOR_TYPE_TO_NP_TYPE[t_type.elem_type]
/opt/intel/openvino_2021/deployment_tools/model_optimizer/extensions/analysis/boolean_input.py:13: DeprecationWarning: `np.bool` is a deprecated alias for the builtin `bool`. To silence this warning, use `bool` by itself. Doing this will not modify any behavior and is safe. If you specifically wanted the numpy scalar type, use `np.bool_` here.
Deprecated in NumPy 1.20; for more details and guidance: https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations
  nodes = graph.get_op_nodes(op='Parameter', data_type=np.bool)
/opt/intel/openvino_2021/deployment_tools/model_optimizer/mo/front/common/partial_infer/concat.py:36: DeprecationWarning: `np.bool` is a deprecated alias for the builtin `bool`. To silence this warning, use `bool` by itself. Doing this will not modify any behavior and is safe. If you specifically wanted the numpy scalar type, use `np.bool_` here.
Deprecated in NumPy 1.20; for more details and guidance: https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations
  mask = np.zeros_like(shape, dtype=np.bool)
[ WARNING ] Const node '/model.8/Resize/Add_input_port_1/value338417277' returns shape values of 'float64' type but it must be integer or float32. During Elementwise type inference will attempt to cast to float32
[ WARNING ] Const node '/model.12/Resize/Add_input_port_1/value341817280' returns shape values of 'float64' type but it must be integer or float32. During Elementwise type inference will attempt to cast to float32
[ WARNING ] Changing Const node '/model.8/Resize/Add_input_port_1/value338418006' data type from float16 to  for Elementwise operation
[ WARNING ] Changing Const node '/model.12/Resize/Add_input_port_1/value341817580' data type from float16 to  for Elementwise operation
[ SUCCESS ] Generated IR version 10 model.
[ SUCCESS ] XML file: /home/ubuntu/YOLOv5-Lite/saved/FP16/v5lite-e.xml
[ SUCCESS ] BIN file: /home/ubuntu/YOLOv5-Lite/saved/FP16/v5lite-e.bin
[ SUCCESS ] Total execution time: 10.69 seconds. 
[ SUCCESS ] Memory consumed: 104 MB. 
It's been a while, check for a new version of Intel(R) Distribution of OpenVINO(TM) toolkit here https://software.intel.com/content/www/us/en/develop/tools/openvino-toolkit/download.html?cid=other&source=prod&campid=ww_2021_bu_IOTG_OpenVINO-2021-4-LTS&content=upg_all&medium=organic or on the GitHub*
ubuntu@ubuntu:~/YOLOv5-Lite$ 
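The networkx/numpy warning above did not stop the conversion, but if you want to satisfy it you can pin the versions the Model Optimizer asks for (taken straight from the warning text):

pip3 install "networkx~=2.5" "numpy<1.20"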

Convert the model (this time on the dev board, with the pip-installed Model Optimizer):

ubuntu@ubuntu:~$ find . -name "mo_onnx.py"
./.local/lib/python3.10/site-packages/openvino/tools/mo/mo_onnx.py
ubuntu@ubuntu:~$ python3 ./.local/lib/python3.10/site-packages/openvino/tools/mo/mo_onnx.py --input_model v5lite-e.onnx --output_dir /home/ubuntu/YOLOv5-Lite/saved/FP16 --input_shape [1,3,640,640] --data_type FP16 --scale_values [255.0,255.0,255.0] --mean_values [0,0,0]
[ WARNING ] Use of deprecated cli option --data_type detected. Option use in the following releases will be fatal.
Check for a new version of Intel(R) Distribution of OpenVINO(TM) toolkit here https://software.intel.com/content/www/us/en/develop/tools/openvino-toolkit/download.html?cid=other&source=prod&campid=ww_2023_bu_IOTG_OpenVINO-2022-3&content=upg_all&medium=organic or on https://github.com/openvinotoolkit/openvino
[ INFO ] The model was converted to IR v11, the latest model format that corresponds to the source DL framework input/output format. While IR v11 is backwards compatible with OpenVINO Inference Engine API v1.0, please use API v2.0 (as of 2022.1) to take advantage of the latest improvements in IR v11.
Find more information about API v2.0 and IR v11 at https://docs.openvino.ai/latest/openvino_2_0_transition_guide.html
[ SUCCESS ] Generated IR version 11 model.
[ SUCCESS ] XML file: /home/ubuntu/YOLOv5-Lite/saved/FP16/v5lite-e.xml
[ SUCCESS ] BIN file: /home/ubuntu/YOLOv5-Lite/saved/FP16/v5lite-e.bin
ubuntu@ubuntu:~$ pip3 install blobconverter
Then generate the blob — either with the blobconverter package installed above (sketched below) or with OpenVINO 2021.4's compile_tool, whose run is shown right after:
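A minimal blobconverter sketch, assuming the IR files produced above and the package's default online conversion service; shaves and the target OpenVINO version should match your camera setup:

import blobconverter

blob_path = blobconverter.from_openvino(
    xml="/home/ubuntu/YOLOv5-Lite/saved/FP16/v5lite-e.xml",
    bin="/home/ubuntu/YOLOv5-Lite/saved/FP16/v5lite-e.bin",
    data_type="FP16",
    shaves=6,            # number of SHAVE cores to compile for
    version="2021.4",    # OpenVINO version the blob targets
)
print(blob_path)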


[setupvars.sh] OpenVINO environment initialized
ubuntu@ubuntu:~/YOLOv5-Lite$ cd /opt/intel/openvino_2021/deployment_tools/tools
ubuntu@ubuntu:/opt/intel/openvino_2021/deployment_tools/tools$ sudo chmod 777 compile_tool/
[sudo] password for ubuntu: 
ubuntu@ubuntu:/opt/intel/openvino_2021/deployment_tools/tools$ cd compile_tool/
ubuntu@ubuntu:/opt/intel/openvino_2021/deployment_tools/tools/compile_tool$ ./compile_tool -m /home/ubuntu/YOLOv5-Lite/saved/FP16/v5lite-e.xml -ip U8 -d MYRIAD -VPU_NUMBER_OF_SHAVES 4 -VPU_NUMBER_OF_CMX_SLICES 4
Inference Engine: 
  IE version ......... 2021.4.1
  Build ........... 2021.4.1-3926-14e67d86634-releases/2021/4
 
Network inputs:
  images : U8 / NCHW
Network outputs:
  output1_yolov5 : FP16 / NCHW
  output2_yolov5 : FP16 / NCHW
  output3_yolov5 : FP16 / NCHW
[Warning][VPU][Config] Deprecated option was used : VPU_MYRIAD_PLATFORM
Done. LoadNetwork time elapsed: 6529 ms
ubuntu@ubuntu:/opt/intel/openvino_2021/deployment_tools/tools/compile_tool$ ls
compile_tool README.md v5lite-e.blob

With the model exported, try it on the OAK camera first: in this setup the whole model runs on the camera side, which handles both inference and distance measurement. At this point all it shows is that the dev board works fine with an OAK-style depth camera.


Next, modify the code so the model runs on the dev board with OpenVINO while distance measurement stays on the camera side. The code below is built on the dev board through CLion remote development: plug the OAK depth camera into the Nezha board's USB port, hook the board up to a monitor, then drive the camera. I'll push the code to GitHub later.

CMakeLists.txt

cmake_minimum_required(VERSION 3.16)
project(demo)
set(CMAKE_CXX_STANDARD 11)
find_package(OpenCV REQUIRED)
#message(STATUS ${OpenCV_INCLUDE_DIRS})
# add include directories
include_directories(${OpenCV_INCLUDE_DIRS})
include_directories(${CMAKE_SOURCE_DIR}/include)
include_directories(${CMAKE_SOURCE_DIR}/include/utility)
# link the OpenCV library
find_package(depthai CONFIG REQUIRED)
add_executable(demo main.cpp include/utility/utility.cpp)
target_link_libraries(demo ${OpenCV_LIBS} depthai::opencv )
 

main.cpp

#include <atomic>
#include <chrono>
#include <cmath>
#include <iomanip>
#include <iostream>
#include <sstream>
// Includes common necessary includes for development using depthai library
#include "depthai/depthai.hpp"
 
/*
The code is the same as for Tiny-yolo-V3, the only difference is the blob file.
The blob was compiled following this tutorial: https://github.com/TNTWEN/OpenVINO-YOLOV4
*/
 
 
static const std::vector<std::string> labelMap = {
        "person",        "bicycle",      "car",           "motorbike",     "aeroplane",   "bus",         "train",       "truck",        "boat",
        "traffic light", "fire hydrant", "stop sign",     "parking meter", "bench",       "bird",        "cat",         "dog",          "horse",
        "sheep",         "cow",          "elephant",      "bear",          "zebra",       "giraffe",     "backpack",    "umbrella",     "handbag",
        "tie",           "suitcase",     "frisbee",       "skis",          "snowboard",   "sports ball", "kite",        "baseball bat", "baseball glove",
        "skateboard",    "surfboard",    "tennis racket", "bottle",        "wine glass",  "cup",         "fork",        "knife",        "spoon",
        "bowl",          "banana",       "apple",         "sandwich",      "orange",      "broccoli",    "carrot",      "hot dog",      "pizza",
        "donut",         "cake",         "chair",         "sofa",          "pottedplant", "bed",         "diningtable", "toilet",       "tvmonitor",
        "laptop",        "mouse",        "remote",        "keyboard",      "cell phone",  "microwave",   "oven",        "toaster",      "sink",
        "refrigerator",  "book",         "clock",         "vase",          "scissors",    "teddy bear",  "hair drier",  "toothbrush"};
 
static std::atomic<bool> syncNN{true};
 
 
int main() {
    // Create pipeline
    dai::Pipeline pipeline;
 
    // Define sources
    auto camRgb = pipeline.create<dai::node::ColorCamera>();
    auto monoLeft = pipeline.create<dai::node::MonoCamera>();
    auto monoRight = pipeline.create<dai::node::MonoCamera>();
    auto stereo = pipeline.create<dai::node::StereoDepth>();
    auto spatialDataCalculator = pipeline.create<dai::node::SpatialLocationCalculator>();
 
 
    // Properties
    camRgb->setPreviewSize(640, 640);
    camRgb->setBoardSocket(dai::CameraBoardSocket::RGB);
    camRgb->setResolution(dai::ColorCameraProperties::SensorResolution::THE_1080_P);
    camRgb->setInterleaved(false);
    camRgb->setColorOrder(dai::ColorCameraProperties::ColorOrder::RGB);
    camRgb->setPreviewKeepAspectRatio(false); // resize the video to fit the preview size so the two stay aligned

    monoLeft->setBoardSocket(dai::CameraBoardSocket::LEFT);
    monoLeft->setResolution(dai::MonoCameraProperties::SensorResolution::THE_720_P);
    monoRight->setBoardSocket(dai::CameraBoardSocket::RIGHT);
    monoRight->setResolution(dai::MonoCameraProperties::SensorResolution::THE_720_P);


    stereo->setDefaultProfilePreset(dai::node::StereoDepth::PresetMode::HIGH_ACCURACY);
    stereo->setLeftRightCheck(true);
    stereo->setDepthAlign(dai::CameraBoardSocket::RGB);
    stereo->setExtendedDisparity(true);
 
    dai::Point2f topLeft(0.4f, 0.4f);
    dai::Point2f bottomRight(0.6f, 0.6f);
 
    dai::SpatialLocationCalculatorConfigData config;
    config.depthThresholds.lowerThreshold = 100;
    config.depthThresholds.upperThreshold = 10000;
    config.roi = dai::Rect(topLeft, bottomRight);
 
    spatialDataCalculator->initialConfig.addROI(config);
    spatialDataCalculator->inputConfig.setWaitForMessage(false);
 
 
    // Network specific settings
    auto detectionNetwork = pipeline.create<dai::node::YoloDetectionNetwork>();
    detectionNetwork->setBlob("../v5lite-e.blob");
    detectionNetwork->setConfidenceThreshold(0.5);
    //Yolo specific parameters
    detectionNetwork->setNumClasses(80);
    detectionNetwork->setCoordinateSize(4);
    detectionNetwork->setAnchors({10,13,16,30,33,23,30,61,62,45,59,119,116,90,156,198,373,326});
    detectionNetwork->setAnchorMasks({{{"side80",{0, 1, 2}},{"side40",{3, 4, 5}},{"side20",{6, 7, 8}}}});
    detectionNetwork->setIouThreshold(0.5);
 
    // RGB output
    auto xoutRgb = pipeline.create<dai::node::XLinkOut>();
    xoutRgb->setStreamName("rgb");

    // depth output
    auto xoutDepth = pipeline.create<dai::node::XLinkOut>();
    xoutDepth->setStreamName("depth");

    // spatial-location (distance) data output
    auto xoutSpatialData = pipeline.create<dai::node::XLinkOut>();
    xoutSpatialData->setStreamName("spatialData");

    // spatial-location calculator config input
    auto xinSpatialCalcConfig = pipeline.create<dai::node::XLinkIn>();
    xinSpatialCalcConfig->setStreamName("spatialCalcConfig");
 
 
    // Linking: preview is the NN input canvas, video is the full-resolution stream
    camRgb->video.link(xoutRgb->input); // video stream for display
    camRgb->preview.link(detectionNetwork->input); // preview stream for inference
    monoLeft->out.link(stereo->left);
    monoRight->out.link(stereo->right);
 
    spatialDataCalculator->passthroughDepth.link(xoutDepth->input);
    stereo->depth.link(spatialDataCalculator->inputDepth);
 
    spatialDataCalculator->out.link(xoutSpatialData->input);
    xinSpatialCalcConfig->out.link(spatialDataCalculator->inputConfig);
 
 
    // output
    auto xlinkParseOut = pipeline.create<dai::node::XLinkOut>();
    xlinkParseOut->setStreamName("parseOut");

    auto xlinkoutOut = pipeline.create<dai::node::XLinkOut>();
    xlinkoutOut->setStreamName("out");

    auto xlinkPassthroughOut = pipeline.create<dai::node::XLinkOut>();
    xlinkPassthroughOut->setStreamName("passthrough");
 
 
    detectionNetwork->out.link(xlinkParseOut->input);
    detectionNetwork->passthrough.link(xlinkPassthroughOut->input);
 
 
    // Connect to device and start pipeline
    dai::Device device;
 
    device.setIrLaserDotProjectorBrightness(1000);
    device.setIrFloodLightBrightness(0);
    device.startPipeline(pipeline);
 
    // Output queues will be used to get the rgb frames and nn data from the outputs defined above
    auto detectQueue = device.getOutputQueue("parseOut",8,false);
    auto passthQueue = device.getOutputQueue("passthrough", 8, false);
    auto depthQueue = device.getOutputQueue("depth", 8, false);
    auto spatialCalcQueue = device.getOutputQueue("spatialData", 8, false);
    auto spatialCalcConfigInQueue = device.getInputQueue("spatialCalcConfig", 8, false);
    auto rgbQueue = device.getOutputQueue("rgb", 8, false);
 
    bool printOutputLayersOnce = true;
    auto color = cv::Scalar(0,255,0);
 
 
    std::vector<dai::ImgDetection> detections;
    auto startTime = std::chrono::steady_clock::now();
    int counter = 0;
    float fps = 0;
    auto color2 = cv::Scalar(255, 255, 255);
    cv::Scalar color1 = cv::Scalar(0, 0, 255);
 
    while (true) {
        counter++;
        auto currentTime = std::chrono::steady_clock::now();
        auto elapsed = std::chrono::duration_cast<std::chrono::duration<float>>(currentTime - startTime);
        if(elapsed > std::chrono::seconds(1)) {
            fps = counter / elapsed.count();
            counter = 0;
            startTime = currentTime;
        }
 
        std::shared_ptr<dai::ImgFrame> inRgb = rgbQueue->get<dai::ImgFrame>();
        std::shared_ptr<dai::ImgFrame> inDepth = depthQueue->get<dai::ImgFrame>();
        std::shared_ptr<dai::ImgDetections> inDet = detectQueue->get<dai::ImgDetections>();
        std::shared_ptr<dai::ImgFrame> ImgFrame = passthQueue->get<dai::ImgFrame>();
 
        cv::Mat frame = inRgb->getCvFrame();
        cv::Mat src = ImgFrame->getCvFrame();
 
        cv::Mat depthFrameColor;
        cv::Mat depthFrame = inDepth->getFrame();
        cv::normalize(depthFrame, depthFrameColor, 255, 0, cv::NORM_INF, CV_8UC1);
        cv::equalizeHist(depthFrameColor, depthFrameColor);
        cv::applyColorMap(depthFrameColor, depthFrameColor, cv::COLORMAP_HOT);
 
        inDet = detectQueue->get<dai::ImgDetections>();
        if(inDet) {
            detections = inDet->detections;
            for(auto& detection : detections) {
                int x1 = detection.xmin * src.cols;
                int y1 = detection.ymin * src.rows;
                int x2 = detection.xmax * src.cols;
                int y2 = detection.ymax * src.rows;
 
                uint32_t labelIndex = detection.label;
                std::string labelStr = std::to_string(labelIndex);
                if(labelIndex < labelMap.size()) {
                    labelStr = labelMap[labelIndex];
                }
                cv::putText(src, labelStr, cv::Point(x1 + 10, y1 + 20), cv::FONT_HERSHEY_TRIPLEX, 0.5, 255);
                std::stringstream confStr;
                confStr << std::fixed << std::setprecision(2) << detection.confidence * 100;
                cv::putText(src, confStr.str(), cv::Point(x1 + 10, y1 + 40), cv::FONT_HERSHEY_TRIPLEX, 0.5, 255);
                cv::rectangle(src, cv::Point(x1, y1), cv::Point(x2, y2), color, cv::FONT_HERSHEY_SIMPLEX);
 
                // 1920*1080
                //cv::rectangle(depthFrameColor, cv::Point(x1, y1), cv::Point(x2, y2)), color, cv::FONT_HERSHEY_SIMPLEX);
                int top_left_x = detection.xmin * frame.cols;
                int top_left_y = detection.ymin * frame.rows;
                int bottom_right_x = detection.xmax * frame.cols;
                int bottom_right_y = detection.ymax * frame.rows;
 
                // clamp to the frame bounds
                top_left_x = top_left_x < 0 ? 0 : top_left_x;
                bottom_right_x = bottom_right_x > frame.cols - 1 ? frame.cols - 1 : bottom_right_x;
                top_left_y = top_left_y < 0 ? 0 : top_left_y;
                bottom_right_y = bottom_right_y > frame.rows - 1 ? frame.rows - 1 : bottom_right_y;
 
                topLeft.x = top_left_x;
                topLeft.y = top_left_y;
                bottomRight.x = bottom_right_x;
                bottomRight.y = bottom_right_y;
 
                // push an ROI in actual pixel coordinates to the spatial-location calculator
                config.roi = dai::Rect(topLeft, bottomRight);
                dai::SpatialLocationCalculatorConfig cfg;
                cfg.addROI(config);
                spatialCalcConfigInQueue->send(cfg);
                std::vector<dai::SpatialLocations> spatialData = spatialCalcQueue->get<dai::SpatialLocationCalculatorData>()->getSpatialLocations();
 
                for (auto &depthData : spatialData) {
                    auto roi = depthData.config.roi;
                    roi = roi.denormalize(depthFrameColor.cols, depthFrameColor.rows);
                    auto xmin = (int) roi.topLeft().x;
                    auto ymin = (int) roi.topLeft().y;
                    auto xmax = (int) roi.bottomRight().x;
                    auto ymax = (int) roi.bottomRight().y;
 
                    // clamp to the frame bounds (optional)
//                    xmin = xmin < 0 ? 0 : xmin;
//                    xmax = xmax > frame.cols - 1 ? frame.cols - 1 : xmax;
//                    ymin = ymin < 0 ? 0 : ymin;
//                    ymax = ymax > frame.rows - 1 ? frame.rows - 1 : ymax;
 
                    auto coords = depthData.spatialCoordinates;
                    auto distance = std::sqrt(coords.x * coords.x + coords.y * coords.y + coords.z * coords.z);
                    auto fontType = cv::FONT_HERSHEY_TRIPLEX;
 
                    std::stringstream rgb_depthX, depthX, rgb_depthX_;
                    rgb_depthX << "X: " << (int) coords.x << " mm";
                    rgb_depthX_.precision(2);
                    rgb_depthX_ << "dis: " << std::fixed << static_cast<float>(distance) << " mm";
 
                    cv::rectangle(frame,
                                  cv::Point(xmin, ymin), cv::Point(xmax, ymax),
                                  color,
                                  fontType);
 
                    cv::putText(frame, rgb_depthX_.str(), cv::Point(xmin + 10, ymin - 20),
                                fontType,
                                0.5, color1);
 
                    cv::putText(frame, rgb_depthX.str(), cv::Point(xmin + 10, ymin + 20),
                                fontType,
                                0.5, color1);
                    std::stringstream rgb_depthY, depthY;
                    rgb_depthY << "Y: " << (int) coords.y << " mm";
                    cv::putText(frame, rgb_depthY.str(), cv::Point(xmin + 10, ymin + 35),
                                fontType,
                                0.5, color1);
                    std::stringstream rgb_depthZ, depthZ;
                    rgb_depthZ << "Z: " << (int) coords.z << " mm";
                    cv::putText(frame, rgb_depthZ.str(), cv::Point(xmin + 10, ymin + 50),
                                fontType,
                                0.5, color1);
 
 
                    cv::rectangle(depthFrameColor,
                            cv::Point(xmin, ymin), cv::Point(xmax, ymax),
                            color,
                            fontType);
                    depthX << "X: " << (int) coords.x << " mm";
                    cv::putText(depthFrameColor, depthX.str(), cv::Point(xmin + 10, ymin + 20),
                                fontType, 0.5, color1);
                    depthY << "Y: " << (int) coords.y << " mm";
                    cv::putText(depthFrameColor, depthY.str(), cv::Point(xmin + 10, ymin + 35),
                                fontType, 0.5, color1);
                    depthZ << "Z: " << (int) coords.z << " mm";
                    cv::putText(depthFrameColor, depthZ.str(), cv::Point(xmin + 10, ymin + 50),
                                fontType, 0.5, color1);
                }
            }
 
            std::stringstream fpsStr;
            fpsStr << "NN fps: " << std::fixed << std::setprecision(2) << fps;
//            printf("fps %f\n", fps);
            cv::putText(src, fpsStr.str(), cv::Point(4, 22), cv::FONT_HERSHEY_TRIPLEX, 1,
                        cv::Scalar(0, 255, 0));
            cv::putText(frame, fpsStr.str(), cv::Point(4, 22), cv::FONT_HERSHEY_TRIPLEX, 1,
                        cv::Scalar(0, 255, 0));
 
            // Show the frame
//            cv::imshow("src", src);
            cv::imshow("frame", frame);
            cv::imwrite("frame.jpg", frame);
//            cv::imshow("depth", depthFrameColor);
            int key = cv::waitKey(1);
            if(key == 'q' || key == 'Q' || key == 27) {
                return 0;
            }
        }
    }
}


Then strip the inference code off the camera side and run inference locally on the Nezha dev board instead, replacing the whole inference path with OpenVINO:
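For reference, board-side inference with the OpenVINO 2.0 C++ API boils down to the following minimal sketch; model path, device name and preprocessing are placeholders, not the exact code of this project:

#include <iostream>
#include <opencv2/opencv.hpp>
#include <openvino/openvino.hpp>

int main() {
    ov::Core core;
    // read and compile the IR produced earlier for the board's CPU
    ov::CompiledModel compiled = core.compile_model("v5lite-e.xml", "CPU");
    ov::InferRequest request = compiled.create_infer_request();

    // grab the input tensor and fill it from an image
    cv::Mat img = cv::imread("frame.jpg");
    cv::resize(img, img, cv::Size(640, 640));
    ov::Tensor input = request.get_input_tensor();
    // ... copy img into `input` in the layout the model expects (NCHW, scaled) ...

    request.infer();
    ov::Tensor output = request.get_output_tensor();
    std::cout << "output shape: " << output.get_shape() << std::endl;
    return 0;
}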

(1) First, switch to grabbing the camera as an encoded stream while keeping the distance measurement, and decode the H.265 stream purely on the CPU. Software decoding runs at about 30 fps, which looks acceptable; this little board's CPU holds up reasonably well for software decode.

CMakeLists.txt

cmake_minimum_required(VERSION 3.16)
project(demo)
set(CMAKE_CXX_STANDARD 11)
find_package(OpenCV REQUIRED)
#message(STATUS ${OpenCV_INCLUDE_DIRS})
# add include directories
include_directories(${OpenCV_INCLUDE_DIRS})
include_directories(${CMAKE_SOURCE_DIR}/include)
include_directories(${CMAKE_SOURCE_DIR}/include/utility)
# link the OpenCV library
find_package(depthai CONFIG REQUIRED)
add_executable(demo main.cpp include/utility/utility.cpp)
target_link_libraries(demo ${OpenCV_LIBS} depthai::opencv -lavformat -lavcodec -lswscale -lavutil -lz)

main.cpp

#include <chrono>
#include <cmath>
#include <cstdio>
#include <fstream>
#include <iomanip>
#include <iostream>
#include <sstream>
extern "C"
{
#include <libavcodec/avcodec.h>
#include <libavformat/avformat.h>
#include <libswscale/swscale.h>
#include <libavutil/imgutils.h>
}
 
 
#include "utility.hpp"
 
#include "depthai/depthai.hpp"
 
using namespace std::chrono;
 
int main(int argc, char **argv) {
  dai::Pipeline pipeline;
  //define the source nodes
  auto cam = pipeline.create<dai::node::ColorCamera>();
  cam->setBoardSocket(dai::CameraBoardSocket::RGB);
  cam->setResolution(dai::ColorCameraProperties::SensorResolution::THE_1080_P);
  cam->setVideoSize(1920, 1080);
  cam->setFps(30);
  auto Encoder = pipeline.create<dai::node::VideoEncoder>();
  Encoder->setDefaultProfilePreset(cam->getVideoSize(), cam->getFps(),
                   dai::VideoEncoderProperties::Profile::H265_MAIN);
 
 
  cam->video.link(Encoder->input);
 
  auto monoLeft = pipeline.create<dai::node::MonoCamera>();
  auto monoRight = pipeline.create<dai::node::MonoCamera>();
  auto stereo = pipeline.create<dai::node::StereoDepth>();
  auto spatialLocationCalculator = pipeline.create<dai::node::SpatialLocationCalculator>();

  auto xoutDepth = pipeline.create<dai::node::XLinkOut>();
  auto xoutSpatialData = pipeline.create<dai::node::XLinkOut>();
  auto xinSpatialCalcConfig = pipeline.create<dai::node::XLinkIn>();
  auto xoutRgb = pipeline.create<dai::node::XLinkOut>();
  xoutDepth->setStreamName("depth");
  xoutSpatialData->setStreamName("spatialData");
  xinSpatialCalcConfig->setStreamName("spatialCalcConfig");
  xoutRgb->setStreamName("rgb");
 
  monoLeft->setResolution(dai::MonoCameraProperties::SensorResolution::THE_400_P);
  monoLeft->setBoardSocket(dai::CameraBoardSocket::LEFT);
  monoRight->setResolution(dai::MonoCameraProperties::SensorResolution::THE_400_P);
  monoRight->setBoardSocket(dai::CameraBoardSocket::RIGHT);

  stereo->setDefaultProfilePreset(dai::node::StereoDepth::PresetMode::HIGH_ACCURACY);
  stereo->setLeftRightCheck(true);
  stereo->setExtendedDisparity(true);
  spatialLocationCalculator->inputConfig.setWaitForMessage(false);
 
 
  dai::SpatialLocationCalculatorConfigData config;
  config.depthThresholds.lowerThreshold = 200;
  config.depthThresholds.upperThreshold = 10000;
  config.roi = dai::Rect(dai::Point2f(0.1f, 0.45f), dai::Point2f(0.2f, 0.55f)); // initial ROI; re-sent each frame in the loop below
  spatialLocationCalculator->initialConfig.addROI(config);
 
  // Linking
  monoLeft->out.link(stereo->left);
  monoRight->out.link(stereo->right);
 
  spatialLocationCalculator->passthroughDepth.link(xoutDepth->input);
  stereo->depth.link(spatialLocationCalculator->inputDepth);
 
  spatialLocationCalculator->out.link(xoutSpatialData->input);
  xinSpatialCalcConfig->out.link(spatialLocationCalculator->inputConfig);
 
 
  //define the output for the encoded stream
  auto xlinkoutpreviewOut = pipeline.create<dai::node::XLinkOut>();
  xlinkoutpreviewOut->setStreamName("out");
 
  Encoder->bitstream.link(xlinkoutpreviewOut->input);
 
 
  //create the device and push the pipeline to it
  dai::Device device(pipeline);
  device.setIrLaserDotProjectorBrightness(1000);

  //queues for grabbing frames
  auto outqueue = device.getOutputQueue("out", cam->getFps(), false); // maxSize = how many packets to buffer
  auto depthQueue = device.getOutputQueue("depth", 4, false);
  auto spatialCalcQueue = device.getOutputQueue("spatialData", 4, false);

  //auto videoFile = std::ofstream("video.h265", std::ios::binary);
 
 
  int width = 1920;
  int height = 1080;
  AVCodec *pCodec = avcodec_find_decoder(AV_CODEC_ID_H265);
  AVCodecContext *pCodecCtx = avcodec_alloc_context3(pCodec);
  int ret = avcodec_open2(pCodecCtx, pCodec, NULL);
  if (ret < 0) { // open the decoder
    printf("Could not open codec.\n");
    return -1;
  }
  AVFrame *picture = av_frame_alloc();
  picture->width = width;
  picture->height = height;
  picture->format = AV_PIX_FMT_YUV420P;
  ret = av_frame_get_buffer(picture, 1);
  if (ret < 0) {
    printf("av_frame_get_buffer error\n");
    return -1;
  }
  AVFrame *pFrame = av_frame_alloc();
  pFrame->width = width;
  pFrame->height = height;
  pFrame->format = AV_PIX_FMT_YUV420P;
  ret = av_frame_get_buffer(pFrame, 1);
  if (ret < 0) {
    printf("av_frame_get_buffer error\n");
    return -1;
  }
  AVFrame *pFrameRGB = av_frame_alloc();
  pFrameRGB->width = width;
  pFrameRGB->height = height;
  pFrameRGB->format = AV_PIX_FMT_RGB24;
  ret = av_frame_get_buffer(pFrameRGB, 1);
  if (ret < 0) {
    printf("av_frame_get_buffer error\n");
    return -1;
  }


  int picture_size = av_image_get_buffer_size(AV_PIX_FMT_YUV420P, width, height,
                                              1); // bytes needed to store one frame in this format
  uint8_t *out_buff = (uint8_t *) av_malloc(picture_size * sizeof(uint8_t));
  av_image_fill_arrays(picture->data, picture->linesize, out_buff, AV_PIX_FMT_YUV420P, width,
                       height, 1);
  // conversion context: YUV420P from the decoder -> RGB24 for OpenCV
  SwsContext *img_convert_ctx = sws_getContext(width, height, AV_PIX_FMT_YUV420P,
                                               width, height, AV_PIX_FMT_RGB24, 4,
                                               NULL, NULL, NULL);
  AVPacket *packet = av_packet_alloc();
 
  auto startTime = steady_clock::now();
  int counter = 0;
  float fps = 0;
  auto spatialCalcConfigInQueue = device.getInputQueue("spatialCalcConfig");
  while (true) {
    counter++;
    auto currentTime = steady_clock::now();
    auto elapsed = duration_cast<duration<float>>(currentTime - startTime);
    if (elapsed > seconds(1)) {
      fps = counter / elapsed.count();
      counter = 0;
      startTime = currentTime;
    }
 
 
 
 
    auto h265Packet = outqueue->get<dai::ImgFrame>();
 
 
    //videoFile.write((char *) (h265Packet->getData().data()), h265Packet->getData().size());

    packet->data = (uint8_t *) h265Packet->getData().data();  // pointer to one complete H.265 frame
    packet->size = h265Packet->getData().size();              // size of that frame in bytes
    packet->stream_index = 0;
    ret = avcodec_send_packet(pCodecCtx, packet);
    if (ret < 0) {
      printf("avcodec_send_packet error\n");
      continue;
    }
    av_packet_unref(packet);
    int got_picture = avcodec_receive_frame(pCodecCtx, pFrame);
    av_frame_is_writable(pFrame);
    if (got_picture < 0) {
      printf("avcodec_receive_frame error\n");
      continue;
    }

    sws_scale(img_convert_ctx, pFrame->data, pFrame->linesize, 0,
              height,
              pFrameRGB->data, pFrameRGB->linesize);
 
 
    cv::Mat mRGB(cv::Size(width, height), CV_8UC3);
    mRGB.data = (unsigned char *) pFrameRGB->data[0];
    cv::Mat mBGR;
    cv::cvtColor(mRGB, mBGR, cv::COLOR_RGB2BGR);
    std::stringstream fpsStr;
    fpsStr << "NN fps: " << std::fixed << std::setprecision(2) << fps;
    printf("fps %f\n", fps);
    cv::putText(mBGR, fpsStr.str(), cv::Point(4, 22), cv::FONT_HERSHEY_TRIPLEX, 0.4,
                cv::Scalar(0, 255, 0));


    // ROI sent to the spatial-location calculator each frame (a fixed strip of the image here)
    config.roi = dai::Rect(dai::Point2f(3 * 0.1f, 0.45f), dai::Point2f((3 + 1) * 0.1f, 0.55f));
    dai::SpatialLocationCalculatorConfig cfg;
    cfg.addROI(config);
    spatialCalcConfigInQueue->send(cfg);
 
    // auto inDepth = depthQueue->get();
    //cv::Mat depthFrame = inDepth->getFrame(); // depthFrame values are in millimeters
 
 
    auto spatialData = spatialCalcQueue->get<dai::SpatialLocationCalculatorData>()->getSpatialLocations();
    for(auto depthData : spatialData) {
      auto roi = depthData.config.roi;
      roi = roi.denormalize(mBGR.cols, mBGR.rows);

      auto xmin = static_cast<int>(roi.topLeft().x);
      auto ymin = static_cast<int>(roi.topLeft().y);
      auto xmax = static_cast<int>(roi.bottomRight().x);
      auto ymax = static_cast<int>(roi.bottomRight().y);

      auto coords = depthData.spatialCoordinates;
      auto distance = std::sqrt(coords.x * coords.x + coords.y * coords.y + coords.z * coords.z);
      auto color = cv::Scalar(0, 200, 40);
      auto fontType = cv::FONT_HERSHEY_TRIPLEX;
      cv::rectangle(mBGR, cv::Point(xmin, ymin), cv::Point(xmax, ymax), color);
      std::stringstream depthDistance;
      depthDistance.precision(2);
      depthDistance << std::fixed << static_cast<float>(distance / 1000.0f) << "m";
      cv::putText(mBGR, depthDistance.str(), cv::Point(xmin + 10, ymin + 20), fontType, 0.5, color);
    }


    cv::imshow("demo", mBGR);
    cv::imwrite("demo.jpg", mBGR);

    cv::waitKey(1);
  }


  return 0;
}

The whole decode runs on the Nezha dev board at around 30 fps, which is acceptable. I won't upload pictures; you can benchmark it yourself. The prerequisite is having the FFmpeg libraries installed.
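On Ubuntu 22.04 the FFmpeg runtime and the development packages that the CMakeLists above links against can be installed with something like:

sudo apt install -y ffmpeg libavcodec-dev libavformat-dev libswscale-dev libavutil-dev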

(2) YOLOv8 model conversion and inference on the dev board. Make sure opset=11 here; opset 14 does not work. The conversion itself can be done right on the dev board.

ubuntu@ubuntu:~$ pip install ultralytics

Conversion code:

ubuntu@ubuntu:~$ cat convert_yolov8.py
from ultralytics import YOLO
 
# Load a model
model = YOLO("yolov8n.yaml") # build a new model from scratch
model = YOLO("yolov8n.pt") # load a pretrained model (recommended for training)
 
# Use the model
# model.train(data="coco8.yaml", epochs=3) # train the model
# metrics = model.val() # evaluate model performance on the validation set
results = model("https://ultralytics.com/images/bus.jpg") # predict on an image
path = model.export(format="onnx") # export the model to ONNX format
path = model.export(format="openvino",opset=11) # export the model to ONNX format
CMakeLists.txt


cmake_minimum_required(VERSION 3.12)
project(yolov8_openvino_example)
 
set(CMAKE_CXX_STANDARD 14)
 
find_package(OpenCV REQUIRED)
 
include_directories(
  ${OpenCV_INCLUDE_DIRS}
  /opt/intel/openvino_2023/runtime/include
)
 
add_executable(detect 
  main.cc
  inference.cc
)
 
target_link_libraries(detect
  ${OpenCV_LIBS}
   /opt/intel/openvino_2023/runtime/lib/intel64/libopenvino.so
)

For the test code, just use the official example: ultralytics/examples/YOLOv8-OpenVINO-CPP-Inference at main · ultralytics/ultralytics · GitHub
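Building and running that example on the board follows the usual CMake flow; the paths below are illustrative, check the example's README for the exact arguments:

cd ultralytics/examples/YOLOv8-OpenVINO-CPP-Inference
mkdir build && cd build
cmake .. && make
./detect /path/to/yolov8n_openvino_model/ /path/to/test.jpg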


(3) Add board-side OpenVINO inference plus CPU/FFmpeg decoding and streaming on the board; I'm leaving the OAK distance-measurement code out here.


This model turns out to be fairly heavy, and adding it to the inference side makes things a bit laggy, so I'll leave it out for now and just do CPU encode/decode plus streaming. The test directory and GitHub repo are below, with a screenshot of the result:


Command to pull the stream:
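The exact command is not listed here; a typical way to pull and view an RTSP stream with FFmpeg would be the following, where the URL (port and path) is an assumption and has to match whatever the server in the repo below announces:

ffplay -rtsp_transport tcp rtsp://<board-ip>:8554/stream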

github:https://github.com/sxj731533730/OAK_Rtserver.git

References:

[1] How to convert a YOLOv5-Lite model to blob format for the OAK camera (OAK China, CSDN blog): https://blog.csdn.net/oakchina/article/details/129403986

[2]https://github.com/openvinotoolkit/openvino_notebooks/tree/latest/notebooks/pose-estimation-webcam


Original title: Developer Hands-on | Trying out the Intel dev board with an OAK depth camera

Source: Intel IoT (英特爾物聯(lián)網) WeChat official account
