OH Development
2025
Scanalyzer is a Streamlit-based web tool for 3D mesh inspection, simplification, and analysis. Users can upload their own meshes or try the bundled example meshes to gain insight into geometry features, curvature, thickness, and more, with integrated machine learning predictions.
Features
- 3D Viewer for interactive mesh inspection
- Geometry Analysis: surface area, volume, edge lengths, triangle quality
- Curvature & Thickness Estimation
- ML-powered Simplification Suggestions
- Low-poly mesh generation (Mild, Medium, Aggressive)
- Example mesh support for instant demo (.ply format)
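The geometry metrics listed above (surface area, volume, edge lengths) can all be derived from raw vertex and face arrays. A minimal, library-free sketch of two of them is shown below; Scanalyzer's actual implementation is not reproduced here, and a production tool would more likely lean on a mesh library such as trimesh or Open3D.

```python
# Sketch: surface area and volume of a triangle mesh from vertex/face lists.
# Pure Python for clarity; not Scanalyzer's actual implementation.

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def sub(a, b):
    return (a[0]-b[0], a[1]-b[1], a[2]-b[2])

def norm(v):
    return (v[0]**2 + v[1]**2 + v[2]**2) ** 0.5

def mesh_metrics(vertices, faces):
    """Surface area and volume (divergence theorem) of a closed triangle mesh."""
    area = 0.0
    volume = 0.0
    for i, j, k in faces:
        a, b, c = vertices[i], vertices[j], vertices[k]
        n = cross(sub(b, a), sub(c, a))
        area += 0.5 * norm(n)  # triangle area = |AB x AC| / 2
        # signed tetrahedron volume against the origin: dot(a, b x c) / 6
        volume += (a[0]*(b[1]*c[2] - b[2]*c[1])
                 + a[1]*(b[2]*c[0] - b[0]*c[2])
                 + a[2]*(b[0]*c[1] - b[1]*c[0])) / 6.0
    return area, abs(volume)

# Unit cube: 8 vertices, 12 triangles with outward-facing winding.
V = [(0,0,0),(1,0,0),(1,1,0),(0,1,0),(0,0,1),(1,0,1),(1,1,1),(0,1,1)]
F = [(0,2,1),(0,3,2),(4,5,6),(4,6,7),(0,1,5),(0,5,4),
     (1,2,6),(1,6,5),(2,3,7),(2,7,6),(3,0,4),(3,4,7)]
area, vol = mesh_metrics(V, F)
print(area, vol)  # unit cube: surface area 6.0, volume 1.0
```

The same per-triangle loop extends naturally to edge-length statistics and triangle-quality measures such as aspect ratio.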
This project trains a simple robotic agent to move toward a target using Unity's ML-Agents toolkit.
Overview
- Environment: Unity 3D simulation with Rigidbody-based agent physics
- Agent: `SorterAgent.cs` collects relative target position + velocity and outputs continuous movement actions
- Training Framework: ML-Agents (v0.30.0), PyTorch backend
- Training Output: Trained `.onnx` model for inference inside Unity
- Behavior: Agent learns to reach a target while avoiding falling or drifting inefficiently
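The observation/action interface described above can be mirrored in plain Python to show the loop shape: the observation is relative target position plus agent velocity (6 floats), and the action is a continuous 3D force. This is a hedged sketch, not the contents of `SorterAgent.cs`; a hand-tuned PD controller stands in for the trained policy network.

```python
# Sketch of the agent loop: observation = relative position + velocity,
# action = continuous 3D force. The PD policy is an illustrative stand-in
# for the trained ONNX network, not the project's actual code.

def observe(agent_pos, agent_vel, target_pos):
    rel = [t - p for t, p in zip(target_pos, agent_pos)]
    return rel + list(agent_vel)  # 6-dim observation vector

def pd_policy(obs, kp=1.0, kd=0.8):
    rel, vel = obs[:3], obs[3:]
    return [kp * r - kd * v for r, v in zip(rel, vel)]  # continuous action

def step(pos, vel, action, dt=0.1):
    # Semi-implicit Euler integration, loosely mirroring Rigidbody physics.
    vel = [v + a * dt for v, a in zip(vel, action)]
    pos = [p + v * dt for p, v in zip(pos, vel)]
    return pos, vel

pos, vel, target = [0.0, 0.0, 0.0], [0.0, 0.0, 0.0], [2.0, 0.0, 1.0]
for _ in range(300):
    obs = observe(pos, vel, target)
    pos, vel = step(pos, vel, pd_policy(obs))
dist = sum((t - p) ** 2 for t, p in zip(target, pos)) ** 0.5
print(round(dist, 4))  # agent settles near the target
```

In the real project, PPO learns the mapping from this 6-float observation to the continuous action instead of the hand-coded gains used here.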
- `config/robotic_sorter.yaml`: Training config
- `Assets/Scripts/SorterAgent.cs`: Core agent logic
- `unity_env/robotic_sorter_sim/`: Complete Unity environment with scene, materials, prefabs
- `models/Sorter_run_03/`: Output folder with trained ONNX model
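The training config referenced above is not reproduced in this document; a minimal PPO config in the schema used by the ML-Agents 0.30.0 Python package might look like the following. The behavior name and all hyperparameter values are illustrative assumptions, not the project's actual settings.

```yaml
# Illustrative sketch of a config/robotic_sorter.yaml-style PPO config.
# Behavior name and values are assumptions.
behaviors:
  RoboticSorter:
    trainer_type: ppo
    hyperparameters:
      batch_size: 1024
      buffer_size: 10240
      learning_rate: 3.0e-4
    network_settings:
      hidden_units: 128
      num_layers: 2
    reward_signals:
      extrinsic:
        gamma: 0.99
        strength: 1.0
    max_steps: 500000
    time_horizon: 64
    summary_freq: 10000
```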
- Open the Unity project from `unity_env/robotic_sorter_sim`
- Load the `RoboticSorter` scene
- Drag the trained `.onnx` model into the `Behavior Parameters` > `Model` field
- Hit Play to observe agent inference
Unity ML-Agents project for training a robotic agent to visually identify and sort 3D objects. Combines reinforcement learning (PPO) with a Unity simulation environment using custom C# behavior scripts and Python-based training pipelines.
This project implements a multi-agent architecture using GPT-based agents, where each role has a specific function: a Planner Agent breaks down tasks, and a Developer Agent writes code to solve each subtask. The system is designed for extensibility, with QA, Critic, and Assembler agents extending the core Planner-Developer loop.
Example Workflow
Example user prompt:
"Create a command-line tool that parses a CSV file and returns JSON-formatted summary statistics."
System output:
- Planner identifies subtasks
- Developer writes code for each subtask
- QA Agent evaluates execution and correctness
- Critic Agent suggests refinements for failed code
- Developer revises and retries failed tasks
- Assembler generates a clean, deduplicated final program
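The workflow above can be sketched as a plain-Python orchestrator with the LLM calls stubbed out. All function names here (`plan`, `develop`, `qa_check`, `critique`, `assemble`) are illustrative, not the project's actual API; in the real system each one would wrap a GPT call.

```python
# Sketch of the multi-agent loop: Planner -> Developer -> QA -> Critic
# (retry) -> Assembler. Agents are stubbed functions standing in for
# GPT-backed roles; names and behavior are illustrative assumptions.

def plan(prompt):
    # Planner: decompose the task (fixed decomposition for the sketch).
    return ["parse CSV", "compute summary stats", "emit JSON"]

def develop(subtask, feedback=None):
    # Developer: produce a code snippet, optionally revised per feedback.
    code = f"# code for: {subtask}"
    return code + (f"  # revised: {feedback}" if feedback else "")

def qa_check(code):
    # QA: evaluate execution/correctness. Stub: fail the stats subtask
    # on its first attempt, pass anything that has been revised.
    return "summary" not in code or "revised" in code

def critique(code):
    # Critic: suggest a refinement for failed code (stubbed).
    return "handle empty input"

def assemble(snippets):
    # Assembler: deduplicate while preserving order, join into one program.
    return "\n".join(dict.fromkeys(snippets))

def run(prompt, max_retries=2):
    results = []
    for subtask in plan(prompt):
        code = develop(subtask)
        for _ in range(max_retries):
            if qa_check(code):
                break
            feedback = critique(code)          # Critic suggests refinements
            code = develop(subtask, feedback)  # Developer retries
        results.append(code)
    return assemble(results)

program = run("CSV -> JSON summary statistics tool")
print(program)
```

The retry loop is where the QA/Critic/Developer cycle from the workflow lives; everything that passes QA flows into the final assembly step.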
Tools, Languages, and Future Development
Current AI projects are developed primarily in Python, leveraging libraries such as PyTorch, TensorFlow, Scikit-learn, and OpenCV. Interactive simulations are built in Unity with ML-Agents, while data handling, model training, and evaluation are managed through Jupyter and Conda-based environments. Version control is maintained through Git and GitHub.
Future work will focus on advancing multi-agent systems, incorporating reinforcement learning for more adaptive behaviors, and integrating real-time computer vision workflows. The aim is to expand these pipelines into versatile, production-ready tools that can power both technical applications and creative outputs—pushing the boundaries of how AI can shape design, interaction, and immersive experiences.