Sr. Embedded AI Engineer
Job Category
Engineering
Experience
5+ years of experience
Employment Type
Full-time (regular)
Location
363 Gangnam-daero, Seocho-gu, Seoul, Republic of Korea



This position is with the STRADVISION Seoul Office


[About STRADVISION] 

We Empower Everything To Perceive Intelligently

Under the mission statement “We Empower Everything To Perceive Intelligently”, STRADVISION is putting all of its effort into making life better for everyone through AI-based camera perception technology. Every day, more than 300 members across 8 offices worldwide focus on creating AI-based vision perception technology, and we expect our software to perceive everything precisely and intelligently to make a 1% difference in people’s lives. We are therefore looking for members who would like to join our meaningful journey and take on challenges no one has faced before, together at STRADVISION.

  

[Our Story] 

🔎 STRADVISION Seoul Office

🔎 STRADVISION’s Core Technology

🔎 STRADVISION's Welfare

 

[Our Technology]

🚗 STRADVISION's Technology

  • STRADVISION was the first deep-learning-based technology startup in the world to obtain ASPICE CL2 certification, in 2019.
  • STRADVISION has also been honored with the AutoSens Award for ‘Best-in-Class Software for Perception Systems’ (Gold Award Winner) for two years in a row (2021, 2022).
  • STRADVISION’s technology has been recognized worldwide, most notably by closing a Series C funding round of KRW 107.6 billion with Aptiv and ZF Group in August 2022.
  • About 167 patents related to autonomous driving/ADAS have been acquired in Korea, Japan, the US, and Europe, and STRADVISION continues to actively develop differentiated technology.
  • By the first half of 2025, vehicles equipped with SVNet had surpassed 4 million units globally, maintaining growth despite the economic slowdown and intensifying industry competition.
  • In 2025, STRADVISION was ranked 10th in Korea for AI competitiveness, following companies such as Samsung, Naver, and LG.



[Mission of the Role]

We’re hiring an Embedded AI Engineer to productize MV-Gen2 multi-camera, Transformer-based vision perception models (e.g., object detection, lane, occupancy) and E2E models (e.g., path planning and control) across multiple automotive SoCs: Renesas, TI (Jacinto/TDA4), Qualcomm, and NVIDIA Orin/Thor.

You will lead INT8 quantization (PTQ/QAT), model porting, runtime/accelerator optimization, and—critically—provide HW-aware redesign guidance to internal DL model teams so architectures remain deployable and efficient under SoC constraints.

This role is a unique opportunity to work on high-impact, cutting-edge research that directly contributes to the development of next-generation autonomous driving systems.


[Key Responsibilities]

The selected candidate will be responsible for designing, developing, and optimizing deep learning models for ADAS/Autonomy.


1) Quantization & Deployment (Multi-Camera Transformers)

  • Convert and deploy multi-camera Transformer perception models to embedded inference stacks using PTQ/QAT, calibration pipelines, and accuracy recovery strategies (mixed precision, selective quantization, layer-wise sensitivity).
  • Build a robust export/packaging flow (e.g., PyTorch → ONNX → target runtime) and resolve conversion/runtime issues (unsupported ops, precision constraints, graph transforms).
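As a toy illustration of the calibration step mentioned above, per-tensor symmetric INT8 PTQ can be sketched in pure Python. This is purely illustrative; in practice the quantization is performed by the SoC vendors’ toolchains, and the calibration data here is invented.

```python
# Minimal sketch of calibration-based static INT8 quantization (PTQ).
# Illustrative only: production flows go through vendor toolchains,
# not hand-rolled code like this.

def calibrate_scale(samples):
    """Per-tensor symmetric scale from calibration batches: map max |x| to 127."""
    max_abs = max(abs(v) for batch in samples for v in batch)
    return max_abs / 127.0 if max_abs > 0 else 1.0

def fake_quantize(values, scale):
    """Round to the INT8 grid, clamp to [-128, 127], then dequantize."""
    quantized = [max(-128, min(127, round(v / scale))) for v in values]
    return [q * scale for q in quantized]

calibration = [[-1.0, 0.5, 0.25], [0.9, -0.3, 1.0]]   # toy activation statistics
scale = calibrate_scale(calibration)                   # here: 1.0 / 127
restored = fake_quantize([0.5, -1.0, 0.0], scale)
max_error = max(abs(a - b) for a, b in zip([0.5, -1.0, 0.0], restored))
```

Layer-wise sensitivity analysis and mixed precision then amount to measuring this reconstruction error per layer and keeping the most sensitive layers in higher precision.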


2) HW-Aware Model Redesign Feedback (Internal Technical Leadership)

  • Provide actionable feedback to internal DL engineers on SoC-friendly Transformer design:
  • Reduce attention/FFN compute & activation memory, manage token/BEV grid size, and avoid/replace ops that break target toolchains.
  • Propose architecture patterns that preserve accuracy while meeting embedded constraints (bandwidth, SRAM/DDR pressure, operator support).
  • Codify guidelines into “design rules” (allowed/avoid ops, preferred blocks, quantization-robust practices) and drive adoption through reviews and decision logs.
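The “design rules” idea above can be sketched as a simple allowlist check run before a model is handed to a vendor toolchain. The target name, op set, and replacement table below are invented for illustration, not real toolchain support tables.

```python
# Toy sketch of codified design rules: scan a model's op list against a
# per-target allowlist and suggest quantization-friendly replacements.
# All names here (target, ops, replacements) are hypothetical examples.

DESIGN_RULES = {
    "example_npu": {
        "allowed": {"Conv", "Relu", "Add", "MatMul", "Softmax"},
        "preferred_replacements": {"GELU": "Relu", "Einsum": None},
    }
}

def review_ops(ops, target):
    """Return (offending op, suggested replacement or None) for each violation."""
    rules = DESIGN_RULES[target]
    report = []
    for op in ops:
        if op in rules["allowed"]:
            continue
        report.append((op, rules["preferred_replacements"].get(op)))
    return report

issues = review_ops(["Conv", "GELU", "Softmax", "Einsum"], "example_npu")
# issues -> [("GELU", "Relu"), ("Einsum", None)]
```

In practice such a check would run in CI against the exported graph, turning the review guidelines into an enforceable gate.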


3) Multi-SoC Runtime & Accelerator Optimization

  • Optimize latency/throughput/memory for real-time perception across NPU/GPU/DLA/DSP backends:
  • Renesas: DRP-AI INT8 PTQ flow (calibration-based static quantization) and translation pipeline integration.
  • NVIDIA Orin: TensorRT build/engine tuning, DLA constraints (INT8/FP16), and GPU–DLA workload partitioning.
  • Qualcomm: ONNX Runtime QNN Execution Provider / QNN SDK-based acceleration strategy and deployment validation.
  • TI: TIDL quantization modes (PTQ/QAT) and “deployable-by-design” constraints.


4) Tooling & Performance Infrastructure

  • Establish repeatable profiling + regression harness for latency/memory/accuracy across SoCs; publish performance dashboards and release artifacts.
  • Collaborate with platform/firmware teams to unblock runtime integration and memory/IO bottlenecks.
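A profiling + regression harness of the kind described above can be sketched in a few lines. The workload here (`dummy_inference`) and the 10% tolerance are hypothetical stand-ins for an actual on-target inference run and a team-chosen budget.

```python
import statistics
import time

# Hypothetical sketch of a latency regression gate: time a workload and
# compare the median against a stored baseline with a tolerance band.

def profile(fn, warmup=3, iters=20):
    """Median wall-clock latency of fn() in milliseconds."""
    for _ in range(warmup):          # discard cold-start samples
        fn()
    samples = []
    for _ in range(iters):
        t0 = time.perf_counter()
        fn()
        samples.append((time.perf_counter() - t0) * 1e3)
    return statistics.median(samples)

def check_regression(latency_ms, baseline_ms, tolerance=0.10):
    """Pass only if latency is within tolerance of the recorded baseline."""
    return latency_ms <= baseline_ms * (1.0 + tolerance)

def dummy_inference():               # stand-in for a real on-target run
    sum(i * i for i in range(10_000))

median_ms = profile(dummy_inference)
```

The same loop generalizes to memory and accuracy metrics, with one recorded baseline per SoC target feeding the dashboards.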



[Basic Qualifications]

  • MS + 3 years of industry experience or PhD + 1 year (or equivalent), with 3–6+ years of hands-on DL experience in total (vision/ML/robotics).
  • Hands-on experience shipping DL models to embedded/edge devices (automotive/robotics/consumer edge), including INT8 quantization (PTQ/QAT) and performance tuning.
  • Strong proficiency in Python + C/C++, model debugging, profiling, and deployment automation.
  • Practical knowledge of at least one deployment stack: TensorRT, ONNX Runtime, QNN, TIDL, DRP-AI, TVM/compilers, etc.
  • Experience optimizing Transformer-based vision models (attention bottlenecks, activation memory, mixed precision, operator constraints).


[Preferred Qualifications]

  • Direct experience with Renesas / TI / Qualcomm / NVIDIA Orin automotive SoCs and their toolchains.
  • Built or owned a multi-target deployment pipeline (same model family across 2–3+ hardware targets).
  • Compiler/runtime background (graph rewriting, kernel selection, MLIR/LLVM/TVM) or NPU codegen experience.
  • ADAS/autonomy perception domain knowledge (multi-camera, BEV/occupancy style representations).
  • CVPR/ICCV/ECCV/NeurIPS/ICLR publications/reviews or competition wins/leadership.



[Application]

  • Required: Resume / Thesis (for those who hold a Master’s degree or above)
  • Optional: Cover Letter, Research/Project Portfolio (including publications, open-source projects, or patents), Other theses
  • Please include detailed information on your relevant work or research experience in your résumé and/or portfolio.
  • For Master’s/Ph.D. degree holders: Please provide detailed project and/or research experience, such as a portfolio or equivalent documentation.


[Recruitment Process]

  • Application Review > SC Screening Test > Tech Interview(s) > Reference Check (for candidates with 5+ years of experience) > Offer > Onboarding
  • Please note that the recruitment process and schedule may change depending on the position and/or other circumstances.


[Others]

  • This position is open for ongoing recruitment and may close earlier once the role has been successfully filled.
  • A 3-month probationary period will apply after joining as a regular employee.
  • Visa sponsorship may be provided for eligible international candidates.
  • If any false or inaccurate information is identified during the hiring process, the recruitment may be discontinued or an offer may be withdrawn.



STRADVISION stands for an open and respectful corporate culture, because we believe diversity helps us find new perspectives.

STRADVISION ensures that all our members have equal opportunities – regardless of age, ethnic origin and nationality, gender and gender identity, physical and mental abilities, religion and belief, sexual orientation, and social background. We ensure diversity right from the recruitment stage, and therefore make hiring decisions based on candidates’ actual competencies, qualifications, and business needs at that point in time.

Please feel free to contact us via our talent acquisition team e-mail if you have any questions.

[STRADVISION HR Team e-mail: [email protected]]
