Making Better CNC Halftone Pictures

[Jason] was messing around with CNC machines and came up with his own halftone CNC picture that might be an improvement over previous attempts we’ve seen.

[Jason] was inspired by this Hack a Day post that converted an image into a halftone, much like the default Photoshop plugin or the Rasterbator. The results were very nice, but when a user on the JoesCNC forum asked how he could make these ‘Mirage’ CNC picture panels, [Jason] knew what he had to do.

He immediately recognized the algorithm behind the Mirage panels as the Gray-Scott reaction-diffusion model. With this algorithm, dark areas take on a fingerprint-like pattern of continuous lines, so the toolhead of the CNC router can cut along the X and Y axes instead of drilling the simple hole pattern of a traditional halftone. After a little bit of coding, [Jason] had an app that converts an image to a reaction-diffusion halftone, which can then be converted to vectors and sent to a router.
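The Gray-Scott model itself is compact enough to sketch in a few lines. The snippet below is a minimal illustration (not [Jason]'s actual code); the F and k values are chosen from the range known to produce the fingerprint-like labyrinth patterns, and the grid size, seed region, and step count are arbitrary demo choices:

```python
# Minimal Gray-Scott reaction-diffusion sketch.
# F and k chosen from the regime that yields labyrinth/fingerprint patterns.
import numpy as np

def gray_scott(n=100, steps=2000, Du=0.16, Dv=0.08, F=0.055, k=0.062, dt=1.0):
    u = np.ones((n, n))   # "feed" chemical, starts saturated
    v = np.zeros((n, n))  # "kill" chemical, starts absent
    # Perturb a small square in the center to kick off the reaction.
    c = n // 2
    u[c-5:c+5, c-5:c+5] = 0.5
    v[c-5:c+5, c-5:c+5] = 0.25

    def lap(a):
        # 5-point Laplacian with wraparound edges
        return (np.roll(a, 1, 0) + np.roll(a, -1, 0)
                + np.roll(a, 1, 1) + np.roll(a, -1, 1) - 4 * a)

    for _ in range(steps):
        uvv = u * v * v
        u += (Du * lap(u) - uvv + F * (1 - u)) * dt
        v += (Dv * lap(v) + uvv - (F + k) * v) * dt
    return u

# Dark regions (low u) become the areas the router traces.
pattern = gray_scott() < 0.5
```

To drive the output with a photo, the idea is to vary F (or k) per pixel according to image brightness, so dark image regions grow denser patterning.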

It’s a very neat build and we imagine that [Jason]’s pictures would cost a bit less than the commercial panels. Check out the video after the break to see the fabrication process.

[youtube=http://www.youtube.com/watch?v=xoJDTPRqI6o&w=470]

29 thoughts on “Making Better CNC Halftone Pictures”

  1. Check out JasonDorie.com. (or just click the first link in the text to go direct to the CNC page). The source isn’t public, but the executable is. The vectorization and conversion to CNC paths was done by commercial software, but I’m hoping to eventually do that part too.

    1. If you scale up the source image (just use Paint) to around 800 or 900 pixels, Reactor will produce an output to match – The patterning will be more dense, and give you better shading. :)

  2. Really nice.

    One of the issues I have adapting traditional halftone designs to my medium is that I basically have a fixed-size brush. (Like a CNC without fine Z-axis control). The algorithm you show looks like it would be possible to do reasonable quality reproductions while accommodating my tool restriction. Does it use constant-width lines, or could it be modified to use them and maintain the effect?

    1. Matt – It sounds like you might be looking for either stippling (which I’m going to add to the Halftone program at some point) or error diffused sampling, which is the more general term. The Reaction program doesn’t use fixed-width dots. They’re close, but there’s a decent amount of subtlety in the output shading that translates into varying dot sizes, and it’s kind of important.

      Check out this link for an example of stippling: http://cs.nyu.edu/~ajsecord/stipples.html

      Feel free to email me directly if you have questions – My email is listed at the top of the CNC page on my site.

    2. That said, I use a commercial program to compute the tracing paths, and it can be told to use a fixed-width tool. As long as the tool is small enough to fit the smallest dot it can be made to draw the biggest ones by making multiple passes. Would that work for you?

  3. Very nice program, too bad it’s not open source. I would love to see how you made various things :) ( Mainly the pixel data to sample point intensity calculation and “dark boost”-feature )

    1. I take the image, scale it up 4x original size using a bicubic filter. The bicubic filter maintains the round shape of the blobs, and upscaling it gives the tracer more data to work with. V1.2 of the program has a “Save Huge” button that does this step for you. From there I use a grayscale tracer with the threshold set to 50%.
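That upscale-then-threshold step can be sketched with Pillow (my own illustration of the described workflow, not the program's source; the demo gradient stands in for real Reactor output, and the actual tracing is done by separate software):

```python
# Sketch of the preprocessing described above: 4x bicubic upscale,
# then a 50% grayscale threshold to produce a clean 1-bit trace input.
from PIL import Image

def upscale_and_threshold(img, factor=4, cutoff=128):
    """Bicubic upscale preserves the round blob shapes; the 50%
    threshold (cutoff=128) gives the tracer a crisp black/white image."""
    img = img.convert("L")
    big = img.resize((img.width * factor, img.height * factor), Image.BICUBIC)
    return big.point(lambda p: 255 if p >= cutoff else 0, mode="1")

# Demo on a synthetic horizontal gradient instead of a real Reactor image
demo = Image.new("L", (50, 50))
demo.putdata([(x * 255) // 49 for _ in range(50) for x in range(50)])
result = upscale_and_threshold(demo)
```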

    1. That depends – Are you working with the Halftone program or the Reactor program? The Halftone one will let you specify the dot size and work area size independently. I generally make prints about 18″ to 24″ across, and use a max dot size of 0.1″. A print of 9″ to 12″ with a max dot size of 0.05″ would produce the same dot pattern, but be physically smaller. The size & resolution you use depends greatly on your intended viewing distance. If you want to look at them from 6 feet away you’ll need a finer dot spacing than if you’re intending to view them from 20 feet.

      For the Reactor program the resolution is entirely based on the resolution of the input image. I typically use 800 to 1200 pixels across.

      1. I was working with reactor.
        Thanks for your response, I was searching by trial and (mostly) error to get my picture to the right resolution.

        The only problem I’ve got atm is that I can convert the image with illustrator (or inkscape) to a vector file and save it as a DXF file, but then I need to convert it to code for my cnc router.

        I’ve heard artcam can export to gcode.
        But my cnc is from the Holzher brand, and I can’t seem to find a program that converts the .Gcode file to .hops.

        I’m trying to write my own software to convert G-code to the necessary format, but it’s proving to be quite a challenge.

  4. btw nice work on the halftoner 1.1.
    I myself got started with metalfusion’s software first, but your adjustable settings which auto preview are just what I was looking for!

    I’ve got a question though:
    – Where do you set the thickness of the plate you will be using?
    – What does point retract do?

    Thanks a lot btw, I’m very impressed by halftoner v1.1!

    1. The Halftone program doesn’t have a setting for thickness – The code assumes Z-zero is at the top of the work piece and -Z is into the piece. The code will generate plunges as deep as required to produce the correct dot size with the angle of bit specified.

      Point retract specifies how far to lift the tool between neighboring dots. Assuming that your hold downs will only be on the edges, you can set the normal retract height to clear them. The normal retract height will be used to move to the first point, then point retract will be used between points, leading to shorter cut times.
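The depth-from-dot-size relationship described above is simple V-bit trigonometry. Here's a worked sketch (my own, not the program's source):

```python
# For a V-shaped bit, plunge depth determines dot diameter:
# a bit with included angle A cutting to depth d leaves a circular
# mark of diameter 2 * d * tan(A/2). Inverting that gives the depth
# the program must generate for a desired dot size.
import math

def plunge_depth(dot_diameter, included_angle_deg):
    """Depth (same units as diameter) for a V-bit of the given
    included angle to cut a dot of the given diameter."""
    half_angle = math.radians(included_angle_deg / 2)
    return (dot_diameter / 2) / math.tan(half_angle)

# A 90-degree bit cuts a dot twice as wide as it is deep:
depth = plunge_depth(0.1, 90)  # 0.05" deep for a 0.1" dot
```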

  5. hi jason,
    i’m trying to make a similar image conversion (reactor – like) with gray-scott PDE simulation. yet, somehow my parameters don’t give the nice result your program gives. can you please elaborate on the image preprocessing and the F and k parameters you use?
    thanks
