Robot Drawing System
A robot that can draw pictures using a pen
Overview
This project (July 2023 – February 2024) developed a robot that can look at an image and draw it with a pen — not by blindly copying pixels, but by learning to generate artistic strokes and planning an efficient drawing path, much like a human artist would.
How It Works
The system has two main components working together:
Stroke Generation uses deep learning to analyze an input image and produce a sequence of pen strokes that capture its essential features. We explored several approaches including Convolutional Neural Networks (CNNs) for feature extraction, Generative Adversarial Networks (GANs) for producing realistic stroke patterns, and Variational Autoencoders (VAEs) for learning compressed stroke representations. GANs proved most effective at generating strokes that look naturally drawn rather than mechanically traced.
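The project description does not fix a concrete stroke format, so as an illustration only, the sketch below assumes the generator emits each stroke as a polyline of (x, y) points, which is then flattened into pen-up/pen-down commands for the arm. The `Stroke` class and `flatten_to_pen_commands` helper are hypothetical names, not part of the actual system.

```python
import math
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Stroke:
    """One continuous pen-down mark, assumed here to be a polyline."""
    points: List[Tuple[float, float]]

    def length(self) -> float:
        # Ink length: summed distance between consecutive points.
        return sum(math.hypot(x2 - x1, y2 - y1)
                   for (x1, y1), (x2, y2) in zip(self.points, self.points[1:]))

def flatten_to_pen_commands(strokes: List[Stroke]) -> List[Tuple[float, float, bool]]:
    """Convert strokes into a flat (x, y, pen_down) command list for the arm."""
    commands = []
    for s in strokes:
        x0, y0 = s.points[0]
        commands.append((x0, y0, False))   # travel move to stroke start, pen up
        for x, y in s.points[1:]:
            commands.append((x, y, True))  # drawing move, pen down
    return commands

demo = [Stroke([(0, 0), (1, 0), (1, 1)]), Stroke([(2, 2), (3, 3)])]
cmds = flatten_to_pen_commands(demo)
```

A representation like this also makes the downstream planning cost explicit: the pen-up travel moves are exactly the segments the path planner tries to shorten.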
Path Planning takes the generated strokes and determines the most efficient order and direction for the robot arm to draw them, minimizing pen-up travel (moving without drawing). We implemented optimization algorithms including Random-Key Genetic Algorithms (RKGA) and Particle Swarm Optimization (PSO) to solve what is essentially a variant of the traveling salesman problem — finding the shortest path through all strokes.
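As a minimal illustration of the random-key idea, the sketch below evolves a vector of keys in [0, 1), one per stroke; sorting the keys decodes an individual into a stroke ordering, and fitness is the total pen-up distance between consecutive strokes. The stroke endpoints, population sizes, and rates are made-up toy values, not the project's actual configuration (which would also handle stroke direction reversal).

```python
import math
import random

# Toy strokes as (start_point, end_point) pairs; real strokes would be
# full polylines, but endpoints suffice to score pen-up travel.
strokes = [((0, 0), (1, 0)), ((5, 5), (6, 5)), ((1, 1), (2, 1)), ((6, 6), (7, 6))]

def pen_up_distance(order):
    """Total travel with the pen lifted between consecutive strokes."""
    total = 0.0
    for a, b in zip(order, order[1:]):
        (x1, y1) = strokes[a][1]  # end of the previous stroke
        (x2, y2) = strokes[b][0]  # start of the next stroke
        total += math.hypot(x2 - x1, y2 - y1)
    return total

def decode(keys):
    """Random-key decoding: visit strokes in ascending key order."""
    return sorted(range(len(keys)), key=lambda i: keys[i])

def rkga(n_strokes, pop_size=40, generations=100, elite=4, mut_rate=0.1, seed=0):
    rng = random.Random(seed)
    pop = [[rng.random() for _ in range(n_strokes)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda ind: pen_up_distance(decode(ind)))
        nxt = pop[:elite]  # elitism: carry the best individuals over unchanged
        while len(nxt) < pop_size:
            p1, p2 = rng.sample(pop[:pop_size // 2], 2)  # mate from the fitter half
            child = [g1 if rng.random() < 0.5 else g2 for g1, g2 in zip(p1, p2)]
            child = [rng.random() if rng.random() < mut_rate else g for g in child]
            nxt.append(child)
        pop = nxt
    best = min(pop, key=lambda ind: pen_up_distance(decode(ind)))
    return decode(best)

best_order = rkga(len(strokes))
```

Because any real-valued key vector decodes to a valid permutation, uniform crossover and mutation never produce an infeasible tour, which is the main appeal of the random-key encoding for this kind of ordering problem.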
Why It’s Interesting
The challenge isn’t just making a robot hold a pen — it’s making it draw in a way that feels artistic. The system must understand which elements of an image are visually important, decide how to represent them with strokes, and execute the drawing efficiently. This sits at the intersection of computer vision, generative AI, and robotics.
Applications
Beyond entertainment and art exhibitions, the underlying technologies have applications in manufacturing (automated marking and engraving), education (interactive art instruction), and accessibility (assistive drawing tools).
Collaborators
- Cheju Halla University: Lead research institution