AI ENGINEERING
OH Development
2025
We develop tools, systems, and workflows that merge artificial intelligence with spatial computing, 3D environments, and physical-digital design. Our focus is on building custom pipelines that transform data into interactive, visual, and functional outputs — from 3D mesh analysis and simulation to immersive, AI-driven experiences. This work bridges creative direction and engineering, enabling applications across fashion, product design, and virtual environments.



Scanalyzer

Scanalyzer is a Streamlit-based web tool for 3D mesh inspection, simplification, and analysis. Users can upload their own meshes or start from the bundled examples, and get insights into geometry features, curvature, thickness, and more, with integrated machine learning predictions.

Features

  • 3D Viewer for interactive mesh inspection
  • Geometry Analysis: surface area, volume, edge lengths, triangle quality
  • Curvature & Thickness Estimation
  • ML-powered Simplification Suggestions
  • Low-poly mesh generation (Mild, Medium, Aggressive)
  • Example mesh support for instant demo (.ply format)
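
As a rough illustration of the pipeline behind these features, the sketch below loads a mesh, reports basic geometry statistics, and generates a low-poly version. This is a hypothetical example assuming a trimesh backend inside the Streamlit app; the file name example.ply and the preset face-count ratios are illustrative, not Scanalyzer's actual implementation.

    import streamlit as st
    import trimesh

    # Hypothetical upload -> analyze -> simplify flow (not the actual app code).
    uploaded = st.file_uploader("Upload a mesh (.ply)", type=["ply"])
    mesh = (trimesh.load(uploaded, file_type="ply") if uploaded
            else trimesh.load("example.ply"))  # placeholder demo mesh

    st.write({
        "faces": len(mesh.faces),
        "surface area": float(mesh.area),
        "volume": float(mesh.volume) if mesh.is_watertight else "n/a (not watertight)",
        "mean edge length": float(mesh.edges_unique_length.mean()),
    })

    # Mild / Medium / Aggressive mapped to illustrative face-count targets.
    level = st.selectbox("Simplification", ["Mild", "Medium", "Aggressive"])
    keep = {"Mild": 0.75, "Medium": 0.5, "Aggressive": 0.25}[level]
    # Note: simplify_quadric_decimation's signature varies across trimesh versions.
    low_poly = mesh.simplify_quadric_decimation(face_count=int(len(mesh.faces) * keep))
    st.write(f"Low-poly result: {len(low_poly.faces)} faces")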


Robotic Sorter – Unity ML-Agents Project

This project trains a simple robotic agent to move toward a target using Unity's ML-Agents toolkit.

Overview

  • Environment: Unity 3D simulation with Rigidbody-based agent physics
  • Agent: SorterAgent.cs, which collects the target's relative position plus the agent's velocity and outputs continuous movement actions (see the Python sketch below)
  • Training Framework: ML-Agents (v0.30.0), PyTorch backend
  • Training Output: Trained .onnx model for inference inside Unity
  • Behavior: Agent learns to reach a target while avoiding falling or drifting inefficiently
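
As a rough Python analogue of the observation and reward logic described above (the actual agent lives in the C# script SorterAgent.cs, so the function names and shaping values here are illustrative):

    import numpy as np

    def collect_observations(agent_pos, agent_vel, target_pos):
        # Relative position to the target plus the agent's own velocity:
        # the observation vector described above (6 floats in 3D).
        return np.concatenate([target_pos - agent_pos, agent_vel])

    def compute_reward(agent_pos, target_pos, fell_off_platform):
        # Illustrative shaping: penalize falling, reward reaching the target,
        # and apply a small per-step penalty to discourage inefficient drifting.
        if fell_off_platform:
            return -1.0
        if np.linalg.norm(target_pos - agent_pos) < 1.5:  # assumed success radius
            return 1.0
        return -0.001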

Files

  • config/robotic_sorter.yaml: Training config
  • Assets/Scripts/SorterAgent.cs: Core agent logic
  • unity_env/robotic_sorter_sim/: Complete Unity environment with scene, materials, prefabs
  • models/Sorter_run_03/: Output folder with trained ONNX model

Usage

  1. Open the Unity project from unity_env/robotic_sorter_sim
  2. Load the RoboticSorter scene
  3. Drag the trained .onnx model into the Behavior Parameters > Model field
  4. Press Play to observe agent inference
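
For retraining rather than inference, ML-Agents training is typically launched from the command line with the trainer config above (the run ID here is assumed to match the models/Sorter_run_03 output folder):

    mlagents-learn config/robotic_sorter.yaml --run-id=Sorter_run_03

Pressing Play in the Editor when prompted connects the simulation to the trainer.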


The broader project goal is to train a robotic agent that visually identifies and sorts 3D objects, combining reinforcement learning (PPO) in a Unity simulation environment with custom C# behavior scripts and Python-based training pipelines.


Multi Agent Planner

This project implements a multi-agent architecture using GPT-based agents, where each role has a specific function: a Planner Agent breaks a task down into subtasks, and a Developer Agent writes code to solve each one. The system is designed for extensibility and is evolving to include QA and Critic agents, as shown in the example workflow below.

Example Workflow

Example user prompt:
"Create a command-line tool that parses a CSV file and returns JSON-formatted summary statistics."

System output:
  • Planner identifies subtasks
  • Developer writes code for each subtask
  • QA Agent evaluates execution and correctness
  • Critic Agent suggests refinements for failed code
  • Developer revises and retries failed tasks
  • Assembler generates a clean, deduplicated final program
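
A minimal sketch of the planner/developer core of this loop, assuming the OpenAI chat completions client; the role prompts, model choice, and function names below are illustrative rather than the project's actual code:

    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    def ask(role_prompt: str, task: str) -> str:
        # One GPT call per agent role.
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # illustrative model choice
            messages=[
                {"role": "system", "content": role_prompt},
                {"role": "user", "content": task},
            ],
        )
        return response.choices[0].message.content

    PLANNER = "Break the user's task into short subtasks, one per line."
    DEVELOPER = "Write Python code that solves the given subtask. Output code only."

    def run(task: str) -> list[str]:
        subtasks = [s for s in ask(PLANNER, task).splitlines() if s.strip()]
        # QA, Critic, and Assembler stages would slot in after this loop.
        return [ask(DEVELOPER, s) for s in subtasks]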


Tools, Languages, and Future Development

Current AI projects are developed primarily in Python, leveraging libraries such as PyTorch, TensorFlow, Scikit-learn, and OpenCV. Interactive simulations are built in Unity with ML-Agents, while data handling, model training, and evaluation are managed through Jupyter and Conda-based environments. Version control is maintained through Git and GitHub.

Future work will focus on advancing multi-agent systems, incorporating reinforcement learning for more adaptive behaviors, and integrating real-time computer vision workflows. The aim is to expand these pipelines into versatile, production-ready tools that can power both technical applications and creative outputs—pushing the boundaries of how AI can shape design, interaction, and immersive experiences.