Streamline Hatching: A Programmer's Attempt at Computational Drawing
December 26th, 2025
Streamline hatching is a technique for transforming images into something that looks hand-sketched, using structure tensors and edge-tangent flows.
I’ve always wanted to be able to draw. I like the idea of taking an image in my mind and making it so everyone else can see what I see. And I like the creative aspect of it: being able to draw whatever I want, however I want. But I can’t draw. Not yet, at least. I’m working on it, though. I try to doodle while I watch TV to build my confidence. I try to sketch things I’m bad at, like clothes, faces, and people, to get better at proportions. But I struggle particularly with different styles of drawing.
This is a guide to one particular technique called streamline hatching. Hatching is a pen-and-paper approach to shading an image: strokes follow the form of objects, density varies with tone, and crosshatching builds up in shadows. By the end, you’ll understand why each piece of the pipeline exists and what happens when you change it.
What We're Building Toward🔗
Hatching is a drawing technique where artists use parallel lines to create tone and texture. The direction of the strokes typically follows the form of what’s being drawn: curving around a sphere, running along a cylinder, radiating from a corner. The spacing between strokes controls darkness: closely spaced strokes read as darker tone, like a shadow, while widely spaced strokes read as highlights.
Cross-hatching adds a second layer of strokes at an angle to the first, building up darker tones in shadow regions.
When hatching is done well, it reveals both what shape something is and how dark it is.
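The spacing-to-tone relationship is the heart of the effect, and it can be sketched as a one-line mapping. The function and spacing values below are illustrative, not taken from the demo:

```javascript
// Map a tone value (0 = black, 1 = white) to stroke spacing in pixels.
// minSpacing/maxSpacing are illustrative defaults, not the demo's values.
function spacingForTone(tone, minSpacing = 2, maxSpacing = 12) {
  // Darker tone -> smaller spacing -> denser, darker-looking hatching.
  return minSpacing + tone * (maxSpacing - minSpacing);
}

console.log(spacingForTone(0)); // dark region: strokes 2px apart
console.log(spacingForTone(1)); // highlight: strokes 12px apart
```

A linear ramp like this is the simplest choice; a perceptually tuned curve would space strokes nonlinearly, since perceived darkness is not linear in ink coverage.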
My goal was to replicate this. Given an input image, I wanted to:
- Understand the directional structure of the image: where are the forms, and which way do they “flow”?
- Generate strokes that follow this structure
- Control density and placement so that dark areas get more strokes than light areas
- Optionally add cross-hatching in the darkest regions
The output should look like something a human might have drawn.
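The second goal, strokes that follow the structure, amounts to streamline tracing: starting from a seed point and repeatedly stepping along the local direction field. Here is a minimal sketch using fixed-size Euler steps; `angleAt(x, y)` is a hypothetical helper returning the local stroke direction in radians, and the step size and length are illustrative:

```javascript
// Trace a stroke by following the direction field from a seed point.
// angleAt(x, y) is an assumed helper returning direction in radians.
function traceStroke(x, y, angleAt, maxSteps = 30, step = 1.5) {
  const points = [[x, y]];
  let prevA = angleAt(x, y);
  for (let i = 0; i < maxSteps; i++) {
    let a = angleAt(x, y);
    // Orientations are only defined up to 180 degrees; flip when the new
    // direction points backward so the stroke doesn't double back on itself.
    if (Math.cos(a - prevA) < 0) a += Math.PI;
    x += step * Math.cos(a);
    y += step * Math.sin(a);
    points.push([x, y]);
    prevA = a;
  }
  return points;
}
```

A production tracer would also stop at image borders, in low-coherence regions, and when a stroke comes too close to an existing one; Euler integration is enough here because strokes are short and the field is smoothed before tracing.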
The Pipeline at a Glance🔗
Here’s the full pipeline:
- Input Image
- Grayscale Conversion
- Structure Tensor Computation (at multiple scales)
- Orientation & Coherence Fields
- Edge Tangent Flow (ETF) Refinement
- Direction Propagation
- Stroke Generation
- Rendering
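Steps 3 and 4 can be sketched at a single scale. This is an illustrative reconstruction, not the demo’s code: it computes per-pixel gradients, forms the 2×2 structure tensor, and derives the edge-tangent orientation and a coherence measure from the tensor’s eigenvalues. A real implementation would smooth the tensor entries over a Gaussian window before the eigen-analysis, which is what makes the tensor more robust than raw gradients:

```javascript
// Single-scale sketch of structure tensor -> orientation & coherence fields.
// `gray` is a Float32Array of luminance values; w, h are image dimensions.
function orientationField(gray, w, h) {
  const angle = new Float32Array(w * h);
  const coherence = new Float32Array(w * h);
  for (let y = 1; y < h - 1; y++) {
    for (let x = 1; x < w - 1; x++) {
      const i = y * w + x;
      // Central-difference gradients
      const gx = (gray[i + 1] - gray[i - 1]) / 2;
      const gy = (gray[i + w] - gray[i - w]) / 2;
      // Structure tensor entries (unsmoothed here for brevity)
      const jxx = gx * gx, jyy = gy * gy, jxy = gx * gy;
      // Eigen-analysis of the 2x2 symmetric tensor
      const diff = jxx - jyy;
      const disc = Math.sqrt(diff * diff + 4 * jxy * jxy);
      const l1 = (jxx + jyy + disc) / 2;
      const l2 = (jxx + jyy - disc) / 2;
      // Dominant eigenvector orientation, rotated 90 degrees to get the
      // edge *tangent*: strokes should run along edges, not across them.
      angle[i] = 0.5 * Math.atan2(2 * jxy, diff) + Math.PI / 2;
      // Coherence: 1 = strongly oriented, 0 = isotropic/flat region
      coherence[i] = l1 + l2 > 1e-8 ? (l1 - l2) / (l1 + l2) : 0;
    }
  }
  return { angle, coherence };
}
```

The ETF refinement and direction propagation steps then smooth this raw field, weighting neighbors by coherence so reliable orientations dominate noisy ones.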
Here’s the working implementation. Try it with the preset shapes or upload your own image. Experiment with the parameters and watch how the output changes. The rest of this post explains what’s happening under the hood.
See the Pen Hatching - WebGPU by Douglas Fenstermacher (@dfens) on CodePen.
The Parameter Space🔗
The full system has many parameters. Here they are, grouped by the aspect of the output they affect:
Analysis parameters (affect how structure is perceived):
- `sigma` (array): Scales for multi-scale analysis
- `scaleCombination`: How to combine scales (‘eigenvalue’ or ‘gradient’)
- `fineScaleBias`: Preference for fine vs. coarse detail
- `minCoherenceThreshold`: Below this, orientation is unreliable
Flow field parameters (affect smoothness and propagation):
- `etfIterations`: How many refinement passes
- `etfKernelSize`: Size of the refinement neighborhood
- `propagationIterations`: How far direction spreads into flat regions
Stroke parameters (affect individual strokes):
- `strokeCount`: How many strokes to attempt
- `minLength`, `maxLength`: Stroke length range
- `strokeWidth`, `widthVariation`: Appearance
- `strokeJitter`: Hand-drawn wobble
Placement parameters (affect overall coverage):
- `toneInfluence`: Weight darkness vs. uniform coverage
- `minSpacing`: Minimum distance between stroke seeds
- `seedRandomness`: Jitter in seed grid
Tone parameters (affect light/dark handling):
- `backgroundBrightnessThreshold`: What counts as “background”
- `lowGradientThreshold`: What counts as “no structure”
- `crossHatch`: Enable perpendicular strokes
- `crossHatchThreshold`: Darkness level for cross-hatching
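One way to see how the tone parameters interact is as a per-pixel decision before any strokes are placed. The function and threshold values below are a hypothetical sketch of that gating logic, not the demo’s actual code:

```javascript
// Sketch of how tone parameters might gate stroke generation at a pixel.
// Threshold defaults are illustrative, not the demo's values.
function pixelAction(brightness, gradientMag, opts = {}) {
  const {
    backgroundBrightnessThreshold = 0.95,
    lowGradientThreshold = 0.01,
    crossHatch = true,
    crossHatchThreshold = 0.3,
  } = opts;
  if (brightness > backgroundBrightnessThreshold) return 'skip';     // background: leave blank
  if (gradientMag < lowGradientThreshold) return 'propagated';       // flat: use propagated direction
  if (crossHatch && brightness < crossHatchThreshold) return 'cross'; // deep shadow: add second layer
  return 'hatch';                                                    // normal single-layer hatching
}
```

The ordering matters: background is excluded first so that bright, flat regions (paper) never receive propagated directions or strokes at all.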
Presets as Starting Points🔗
The implementation includes several presets that configure these parameters for different effects:
Sketch: Loose, gestural strokes
- Fewer, longer strokes
- Higher width variation
- Lower opacity
Engraving: Dense, controlled lines
- Many short strokes
- Low width variation
- High tone influence
- Cross-hatching enabled
Crosshatch: Emphasis on tonal building
- Medium density
- Strong cross-hatching in shadows
Simple Shapes: For geometric forms
- Single scale (coarse)
- High propagation to fill flat regions
Computational Considerations🔗
The implementation supports both CPU and WebGPU backends. The core algorithms are the same; only the tensor operations differ.
For CPU:
- Structure tensor computation: O(width × height × scales × kernel_size²)
- ETF refinement: O(width × height × iterations × kernel_size²)
- Stroke generation: O(stroke_count × max_length)
For interactive use on typical images (500×500), CPU processing takes 1–3 seconds in a modern browser. WebGPU reduces this substantially for the tensor operations but adds overhead for the stroke generation phase, which remains CPU-bound.
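Plugging illustrative numbers into the complexity formulas above shows where the time goes. The parameter values here are assumptions for the sake of the estimate, not measurements from the demo:

```javascript
// Rough per-frame operation counts from the complexity formulas above.
// All parameter values are illustrative assumptions.
function estimateOps({ w = 500, h = 500, scales = 3, kernel = 5,
                       etfIters = 3, strokes = 5000, maxLen = 40 } = {}) {
  return {
    tensor: w * h * scales * kernel * kernel,   // structure tensor passes
    etf: w * h * etfIters * kernel * kernel,    // ETF refinement passes
    strokesOps: strokes * maxLen,               // streamline tracing steps
  };
}

console.log(estimateOps());
// tensor and etf are each ~18.75M operations; stroke tracing is only ~200K,
// which is why the field computation benefits from WebGPU while
// stroke generation stays cheap enough to leave on the CPU.
```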
Further Reading🔗
If you want to look into the underlying techniques:
- Structure Tensor: Förstner, W., & Gülch, E. (1987). “A fast operator for detection and precise location of distinct points, corners and centres of circular features.” Search for “structure tensor image processing” for accessible tutorials.
- Edge Tangent Flow: Kang, H., Lee, S., & Chui, C. K. (2007). “Coherent Line Drawing.” This paper introduced the ETF refinement technique.
- Streamline Visualization: Search “streamline integration visualization” for the fluid dynamics perspective on field-following curves.
- Non-Photorealistic Rendering: The field of NPR has extensive literature on computational illustration techniques. Gooch & Gooch’s book “Non-Photorealistic Rendering” is a comprehensive introduction.
The complete implementation, including an interactive demo, is available on CodePen. Try it with your own images and experiment with the parameters. The best way to understand the system is to see how each setting affects the output.