Sr. AI Engineer (Generative Vision / Realistic-Novel-View Synthesis)
Job Category
Engineering
Experience
5+ years
Employment Type
Full-time (regular)
Location
363 Gangnam-daero, Seocho-gu, Seoul, Republic of Korea



This position is with the STRADVISION Seoul Office


[About STRADVISION] 

We Empower Everything To Perceive Intelligently

With a mission statement of “We Empower Everything To Perceive Intelligently,” STRADVISION puts all of its effort into making life better for everyone through AI-based camera perception technology. Every day, more than 300 members across 8 offices worldwide focus on creating AI-based vision perception technology, and we expect our software to perceive everything precisely and intelligently to make a 1% difference in people’s lives. We are looking for members who would like to join our meaningful journey and take on challenges no one has faced before, together at STRADVISION.

  

[Our Story] 

🔎 STRADVISION Seoul Office

🔎 STRADVISION’s Core Technology

🔎 STRADVISION's Welfare

 

[Our Technology]

🚗 STRADVISION's Technology

  • STRADVISION is the world’s FIRST deep-learning-based technology start-up to obtain ASPICE CL2 certification, achieved in 2019.
  • STRADVISION has also been honored with the AutoSens Award for ‘Best-in-Class Software for Perception Systems’ (Gold Award Winner) for two years in a row (2021, 2022).
  • STRADVISION’s outstanding technology was recognized worldwide when it successfully completed Series C funding of KRW 107.6 billion with Aptiv and ZF Group in August 2022.
  • About 167 patents related to autonomous driving/ADAS have been acquired in Korea, Japan, the US, and Europe, and STRADVISION continues to actively develop differentiated technology.
  • By the first half of 2025, vehicles equipped with SVNet surpassed 4 million units globally, maintaining growth despite an economic slowdown and intensifying industry competition.
  • In 2025, STRADVISION was ranked 10th in domestic AI competitiveness, following companies such as Samsung, Naver, and LG.



[Mission of the Role]

Build and productionize novel-view synthesis (side/rear views) and a semantically consistent BEV/Occupancy-based “Vector-Space/World Model” on top of our MV-Gen2 multi-camera vision stack. You will use front-camera video, LiDAR, and CAN/IMU/GNSS logs to achieve high-fidelity, real-time scene reconstruction, prediction, and synthesis, and ship production-grade models/pipelines that integrate with Path Planning/Control.

This role is a unique opportunity to work on high-impact, cutting-edge research that directly contributes to the development of next-generation autonomous driving systems.



[Key Responsibilities]

The selected candidate will be responsible for designing, developing, and optimizing deep learning models for Generative Vision / BEV & Realistic-Novel-View Synthesis for ADAS/Autonomy.

  • Generative Vision / World Model R&D: Design scene understanding, prediction, and synthesis using BEV/Occupancy/3D representations (generate side/rear camera views with strong texture/geometry consistency).
  • Multi-Sensor Fusion: Fuse front/surround cameras + LiDAR + CAN/IMU/GNSS into a 4D dynamic scene representation, including ego-motion/pose estimation.
  • Model Architecture: Use Transformer/Diffusion/NeRF-variant/Gaussian Splatting/Video Autoencoder models to enforce spatio-temporal consistency under geometric constraints.
  • Training Pipeline: Build large-scale training/eval pipelines for in-house driving logs and public datasets (nuScenes/Waymo/Argoverse2, etc.), including self-supervised and weakly supervised learning.
  • Real-Time Optimization: Optimize for CUDA/TensorRT/ONNX and (optionally) TDA4VM/Orin; manage multithreading and latency/memory budgets.
  • Quantitative Evaluation: Track PSNR/SSIM/LPIPS plus geometric consistency (depth/flow/epipolar), BEV/Occupancy IoU/mAP, temporal stability, and end-to-end planning impact.
  • Production Integration: Integrate Perception → Planning/Simulation (replay/augmentation); automate data generation/augmentation and QA gates.
  • Research-to-Production: Monitor literature, inject domain constraints, and bridge SOTA to production using ablations, KD/distillation, and pragmatic trade-offs.


[Basic Qualifications]

  • Master’s degree with 5+ years of relevant industry experience or Ph.D. with 1+ years (or equivalent), with a total of 5–8+ years of hands-on Deep Learning experience across computer vision, machine learning, or robotics domains.
  • Strong programming skills: Python required, C++/CUDA preferred; able to design systems with careful consideration of performance/memory/latency trade-offs.
  • Strong foundation in 3D geometry and multiview perception, including camera intrinsics/extrinsics/distortion modeling, coordinate transformations, PnP and bundle adjustment, depth/optical flow estimation, and BEV/occupancy representations.
  • Hands-on experience with Temporal/Transformer/Diffusion (at least 1 stack), including large-scale training and hyperparameter tuning.
  • Demonstrated expertise in building large-scale video and multiview training pipelines in PyTorch, including distributed training, mixed precision, checkpointing, logging, and replay mechanisms.
  • Experience in driving-log data engineering (video-LiDAR-CAN alignment, timestamping, and sensor drift handling).



[Preferred Qualifications]

  • Real-world deployment experience with Novel-View Synthesis, NeRF, Gaussian Splatting, or Video Diffusion, including handling dynamics, occlusion, reflection, and illumination variations.
  • Experience with BEV (Bird’s-Eye View), Occupancy, or Vector-Space representations within autonomous driving stacks, covering at least one module such as object detection, lane detection, tracking, or mapping.
  • Knowledge and practical experience in self-/weakly-/semi-supervised learning, Knowledge Distillation / Teacher-Student frameworks, and synthetic-to-real domain adaptation.
  • Familiarity with embedded or autonomous SoCs (NVIDIA Orin/TensorRT, TI TDA4/TIDL), with awareness of real-time constraints.
  • Hands-on experience with simulation and replay frameworks (CARLA, ScenarioRunner, and internal replay tools) for data generation and augmentation.
  • Hands-on experience working with large-scale driving datasets such as nuScenes, Waymo, Argoverse2, and KITTI-360, as well as proprietary in-house driving logs.
  • Demonstrated contributions to the research community, such as publications or reviewing in CVPR, ICCV, ECCV, NeurIPS, or ICLR, and/or competition wins or leadership roles.



[Application]

  • Required: Resume / Thesis (for those who hold a Master’s degree or above)
  • Optional: Cover Letter, Research/Project Portfolio (including publications, open-source projects, or patents), Other theses
  • Please include detailed information on your relevant work or research experience in your résumé and/or portfolio.
  • For Master’s/Ph.D. degree holders: Please provide detailed project and/or research experience, such as a portfolio or equivalent documentation.


[Recruitment Process]

  • Application Review > Recruiter Phone Screening (if required) > Coding Test > Tech Interview(s) > Reference Check (for candidates with 5+ years of experience) > Offer > Onboarding
  • Please be aware that the recruitment process and schedule may change depending on the role and/or other circumstances.


[Others]

  • This position is open for ongoing recruitment and may close earlier once the role has been successfully filled.
  • A 3-month probationary period will apply after joining as a regular employee.
  • Visa sponsorship can be supported for eligible international candidates.
  • If any false or inaccurate information is identified during the hiring process, the recruitment may be discontinued or an offer may be withdrawn.



STRADVISION stands for an open and respectful corporate culture because we believe diversity helps us find new perspectives.

STRADVISION ensures that all our members have equal opportunities, regardless of age, ethnic origin and nationality, gender and gender identity, physical and mental abilities, religion and belief, sexual orientation, and social background. We ensure diversity right from the recruitment stage and therefore make hiring decisions based on candidates’ actual competencies, qualifications, and business needs at that point in time.

Please feel free to contact us via our talent acquisition team e-mail if you have any questions.

[STRADVISION HR Team e-mail: [email protected]]
