A whiteboard that draws for you

Matt Legrand, Creator, Draw Cafe
October 29, 2025 · 6 min read

I wasn’t happy with existing drawing tools, so I made one myself

Digital whiteboarding and sketching tools are prominent in my workflow. They’re the fastest way to explore and express ideas while generating artifacts that translate easily into higher fidelity. But every time I sketch a user flow, I end up manually drawing UI primitives that are purely placeholders. That repetition is time-consuming and frustrating.

Tools like Excalidraw are great at what they do, but they have a “blank canvas” problem (literally). It’s easy to get hung up on generating believable details in the midst of some higher-level design decisions.

So I built draw.cafe. It’s a drawing tool that lets you describe what you want in plain English and generates a complete, editable layout on the canvas.

You can generate “fat-marker” wireframes, like a dashboard with KPI cards and a line chart, with natural language. The AI handles the placement and sizing, giving you a real starting point you can refine with draw.cafe’s other drawing tools.

Draw Cafe has AI that understands layouts

Draw.cafe uses a semantic generation engine. Typing “create a dashboard with three KPI cards and a bar chart” generates the entire layout: properly sized cards, labeled axes, realistic data, appropriate spacing, and visual hierarchy. These are fully editable canvas elements (rectangles, text, lines, and so on) that you can move, resize, style, and refine just as if you’d drawn them manually.

Draw.cafe understands different interface patterns. For example: dashboards, mobile apps, data visualizations, org charts, timelines, flowcharts, wireframes. Each pattern has its own visual conventions, and the system applies them automatically. A mobile app gets navigation patterns and touch-friendly sizing. A scatter plot gets labeled axes, a legend, and proper margins. A timeline gets chronological flow and clear milestones. The engine uses a multi-stage pipeline that detects interface types, injects contextual guidance, enforces design principles, and validates output before it hits the canvas.

Examples of what Draw Cafe can generate

Making language models understand visual design

Getting AI to generate useful layouts turned out to be the hardest part of this project. Language models are great at text but typically bad at spatial reasoning and visual hierarchy.

The canvas was purpose-built to mitigate this. It stores elements as JSON objects with properties like type, position, size, color, and content, which lets the system reason about layout and style in a structured way.

```json
{
  "elements": [
    {
      "id": "element_abc123",
      "type": "rectangle",
      "x": 100,
      "y": 150,
      "width": 200,
      "height": 100,
      "angle": 0,
      "opacity": 1,
      "style": {
        "strokeColor": "#000000",
        "backgroundColor": "#ffffff",
        "fillStyle": "solid",
        "strokeWidth": 2,
        "strokeStyle": "solid",
        "roughness": 1,
        "opacity": 1
      }
    }
  ]
}
```
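In TypeScript terms, that schema might look like the sketch below. The field names come from the JSON above; the interface names and the runtime type guard are my own illustration of how untrusted model output could be checked before it reaches the canvas, not draw.cafe’s actual code.

```typescript
// Hypothetical shapes mirroring the element JSON above.
interface ElementStyle {
  strokeColor: string;      // hex color, e.g. "#000000"
  backgroundColor: string;
  fillStyle: string;        // e.g. "solid"
  strokeWidth: number;
  strokeStyle: string;      // e.g. "solid"
  roughness: number;        // hand-drawn jitter amount
  opacity: number;          // 0..1
}

interface CanvasElement {
  id: string;
  type: string;             // e.g. "rectangle", "text", "line"
  x: number;
  y: number;
  width: number;
  height: number;
  angle: number;
  opacity: number;
  style: ElementStyle;
}

// Narrowing helper: checks the minimal structural properties so the
// pipeline can safely reason about untrusted model output.
function isCanvasElement(obj: unknown): obj is CanvasElement {
  if (typeof obj !== "object" || obj === null) return false;
  const e = obj as Record<string, unknown>;
  return (
    typeof e.id === "string" &&
    typeof e.type === "string" &&
    typeof e.x === "number" &&
    typeof e.y === "number" &&
    typeof e.width === "number" &&
    typeof e.height === "number" &&
    typeof e.style === "object" &&
    e.style !== null
  );
}
```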

When you ask draw.cafe to draw a “dashboard with metrics,” the prompt runs through a semantic generation pipeline. First, you draw an area on the canvas where the dashboard will be placed. This opens the prompt input, so the request starts with size and position data for the desired wireframe. That context informs layout and design decisions: draw.cafe will generate very different results for a dropdown-sized design area than for a full-screen desktop area.

Interface Type Detection

Next, the system analyzes your request to identify what you’re trying to create. “Dashboard with metrics” triggers different generation logic than “mobile app login screen” or “scatter plot showing sales data.” The system recognizes distinct interface patterns, each with its own visual conventions.
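One simple way to implement this kind of detection is keyword scoring against known patterns. The pattern names below come from the article; the keyword lists and scoring scheme are my own sketch, not draw.cafe’s actual logic.

```typescript
// Minimal sketch: classify a prompt into an interface pattern by
// counting keyword matches and picking the highest-scoring pattern.
const PATTERN_KEYWORDS: Record<string, string[]> = {
  dashboard: ["dashboard", "kpi", "metrics", "cards"],
  mobile: ["mobile", "login screen", "ios", "android"],
  chart: ["bar chart", "scatter plot", "line chart", "pie chart"],
  flowchart: ["flowchart", "flow chart", "process"],
  timeline: ["timeline", "milestones", "roadmap"],
};

function detectInterfaceType(prompt: string): string {
  const text = prompt.toLowerCase();
  let best = "generic"; // fallback when nothing matches
  let bestScore = 0;
  for (const [pattern, keywords] of Object.entries(PATTERN_KEYWORDS)) {
    const score = keywords.filter((k) => text.includes(k)).length;
    if (score > bestScore) {
      bestScore = score;
      best = pattern;
    }
  }
  return best;
}
```

A production system would likely use the LLM itself or embeddings for this classification, but keyword scoring illustrates the branching: each detected pattern selects different generation logic downstream.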

Contextual Prompt Enhancement

Draw.cafe injects specific guidance for the interface type. Data visualizations get instructions about chart areas, labeled axes, legends, grid lines, and margins. Mobile interfaces get guidance about navigation patterns, content hierarchy, and touch target sizing. Dashboards get rules about card layouts and KPI presentation.

A simple user request like “create a bar chart” becomes a detailed prompt that includes:

  • Canvas dimensions and target area
  • Chart-specific requirements (axes, labels, data points, legend)
  • Design principles (spacing, hierarchy, balance)
  • Suggested colors based on semantic meaning (e.g., red for danger, green for success)
  • Instructions to use realistic content instead of “Lorem ipsum”
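Assembling that enhanced prompt can be as simple as concatenating the user request, the target area, and per-type guidance. The guidance strings below are illustrative placeholders, not draw.cafe’s real prompts.

```typescript
// Sketch: build a detailed prompt from the user's request plus
// contextual guidance for the detected interface type.
interface TargetArea { x: number; y: number; width: number; height: number; }

const TYPE_GUIDANCE: Record<string, string> = {
  chart: "Include a chart area, labeled axes, a legend, grid lines, and margins.",
  mobile: "Use mobile navigation patterns, clear hierarchy, and touch-friendly target sizes.",
  dashboard: "Arrange KPI cards in a balanced grid with consistent spacing.",
};

function buildPrompt(request: string, type: string, area: TargetArea): string {
  return [
    `User request: ${request}`,
    `Target area: ${area.width}x${area.height} at (${area.x}, ${area.y}).`,
    TYPE_GUIDANCE[type] ?? "",
    "Apply proper spacing, visual hierarchy, and balanced layout.",
    "Use semantically meaningful colors (e.g. red for danger, green for success).",
    "Use realistic content, never Lorem ipsum.",
  ].filter(Boolean).join("\n");
}
```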

Design Principle Enforcement

Every prompt includes fundamental design principles: proper spacing, visual hierarchy, balanced layouts, and consistent color usage.

Output Validation and Sanitization

Language models are probabilistic. They hallucinate invalid colors, create elements outside canvas bounds, or generate overlapping shapes. The validation pipeline catches these issues:

  • Schema validation ensures structural correctness
  • Bounds checking keeps elements within canvas dimensions
  • Minimum size enforcement prevents invisible elements
  • Overlap detection identifies and adjusts problematic positioning
  • Color validation ensures only valid hex codes

This multi-stage pipeline transforms raw LLM output into clean, usable layouts. The difference between “random shapes” and “useful starting point” is entirely in the prompt engineering.

It’s also a fantastic drawing tool

Generative wireframing is the headline feature, but draw.cafe also includes a robust set of design tools. I’ve made opinionated choices about grouping and stacking behaviors, some informed by my own muscle memory with other design tools.

It’s designed for speed and efficiency; in particular, batch changes for color and text properties.

Draw Cafe performs well even with large canvases

Here’s how draw.cafe stays responsive:

  • Spatial Indexing: Fast element lookup using spatial data structures instead of linear search. When you click on the canvas, the system doesn’t check every element; it only checks elements in that spatial region.
  • Update Throttling: React re-renders are throttled to 60fps during rapid interactions like dragging. No excessive re-renders, no janky performance.
  • Lazy Loading: On-demand component loading reduces the initial bundle size. You don’t load the AI generation code until you actually use the AI tool.
  • Edge Runtime Optimization: API routes optimized for the Vercel Edge runtime minimize latency. Faster responses, better user experience.
  • Memoization: Unchanged elements don’t re-render. Only the elements you’re actively manipulating get updated.

Result: smooth interactions even with complex canvases containing hundreds of elements.
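To make the spatial-indexing point concrete, here is a minimal grid-based index for point hit-testing. The cell size and bucketing scheme are my own illustration; draw.cafe’s actual data structure may differ.

```typescript
// Sketch: bucket elements into fixed-size grid cells so a click only
// tests elements in its cell, not the whole canvas.
const CELL = 256; // grid cell size in px (assumed)

type Box = { id: string; x: number; y: number; width: number; height: number };

class SpatialGrid {
  private cells = new Map<string, Box[]>();

  private key(cx: number, cy: number): string { return `${cx},${cy}`; }

  insert(box: Box): void {
    // register the box in every grid cell it overlaps
    for (let cx = Math.floor(box.x / CELL); cx <= Math.floor((box.x + box.width) / CELL); cx++) {
      for (let cy = Math.floor(box.y / CELL); cy <= Math.floor((box.y + box.height) / CELL); cy++) {
        const k = this.key(cx, cy);
        const bucket = this.cells.get(k) ?? [];
        bucket.push(box);
        this.cells.set(k, bucket);
      }
    }
  }

  // hit-test a point by checking only the boxes in its cell
  query(px: number, py: number): Box[] {
    const k = this.key(Math.floor(px / CELL), Math.floor(py / CELL));
    const bucket = this.cells.get(k) ?? [];
    return bucket.filter(
      (b) => px >= b.x && px <= b.x + b.width && py >= b.y && py <= b.y + b.height
    );
  }
}
```

With hundreds of elements, each click touches only the handful of boxes sharing a cell with the cursor instead of scanning the entire element list.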

What’s next

  • Multi-Pass Generation: Right now, generation is single-shot: one prompt, one response. Staged generation with content analysis → layout planning → element generation could improve quality through iterative refinement.
  • Design System Integration: Incorporating formal design systems with grid constraints, typography scales, and spacing rules would improve consistency and professional quality. Think Tailwind-style constraints baked into the generation logic.
  • Real-Time Collaboration: Multiple simultaneous editors, Figma-style. The infrastructure is there (PostgreSQL, auto-save); it just needs WebSocket coordination.

Why this matters

I built draw.cafe because I wanted to skip the tedious parts of layout creation. It’s a tool that combines AI generation with traditional design tools in a way that actually saves me time. It also demonstrates that language models can handle spatial relationships and visual hierarchy acceptably well when given robust prompt design and structured input.

GenAI doesn’t need to be perfect to be useful. It just needs to be good enough to save time. I typically generate 80% of a drawing automatically, then refine the remaining 20% manually. This is the sweet spot for AI-powered creative tools.