Designing Knitted Illusions

One of the great joys of being a researcher is observing an interesting phenomenon and deciding that you want to understand it and recreate it. Then there’s the additional joy of being a knitting researcher, and having that phenomenon be a cool object you can also display in your house or wear!

In the vein of spelunking for natural phenomena, one of my advisors, Zachary Tatlock, discovered WoollyThoughts, a website by English mathematicians Pat Ashforth and Steve Plummer that is a comprehensive craft resource and gallery dedicated to the art of illusion knitting (and shadow knitting, which I won’t get into).1

1 The earliest mention I’ve seen of illusion knitting is actually in Nihon Vogue 1982, but WoollyThoughts has probably the most comprehensive cache of English-language illusion information.

Picture of the WoollyThoughts Illusion Knitting webpage, subpage Illusion Art. A grid of illusion knit examples is displayed, all at angles that reveal the illusion.

Illusion knitting produces knit planes that, when viewed head-on, appear to be meaningless and shapeless gray rectangles, perhaps hung up for soundproofing or to cover wall damage after an ill-advised evening of imbibition. Walk around the mounted image, however, and magic happens: an image emerges from the noise.

An animation of an orthographic Menger sponge illusion. The knit object is slowly rotated from a steep angle on one side, through head-on, to a steep angle on the other side. Head-on, the image is invisible. The image uses three different colour values (lightest, medium, darkest) to depict the sponge.

If you’re as craft-inclined as I am, you might be wondering how these objects are designed and fabricated. Luckily, WoollyThoughts has a couple of tutorials on doing exactly that. If we give them a read, we see that… well, the process is importing your input into Excel and meticulously colouring in your design cell by cell, with a workflow that involves completing small sections at a time, knitting them up, and adjusting accordingly. Large designs apparently take hundreds of hours to plot and hundreds more to knit up.

A grayscale picture of a tiger's head.

The same image, but rotated 90 degrees right and zoomed in on the eye. A small black grid has been overlaid on the image, with larger red grids dividing up 10x10 sections of the small black grid. Several of the grid cells over the tiger's eye have been filled in with orange.

Above is a picture of a tiger, zoomed in to eye-level detail, with a grid overlaid to make designing the pattern easier. Pat generates the pattern by placing orange tiles and black tiles down, which correspond to what is seen in the angled view of the knitted object. Further detail on the manual design process is in this tutorial.

Same as above, but almost the whole image is filled in with orange or black cells, with each row having either only orange and unshaded cells, or black and unshaded cells.

Now here’s the final design for the zoomed-in eye section. Pat then knits up this section to see if any details need to be adjusted, seen below.

A small knitted swatch of the pattern that is designed. The swatch is knit with orange and black yarn and is viewed at an angle, revealing a portion of the tiger's face.

So right off the bat, this Excel-cell-placement task seems like a great candidate for automation!

Another aspect of the website caught my interest while I was scrolling around. They’d made this example with Magritte’s Treachery of Images, where it seemed at first that the image of a pipe was viewable from one side, and the text “This is not a pipe” was viewable from the other. After some rigorous gif-watching investigation, it turned out these were actually two separate knitted objects put together in one animation. But it got us thinking – would it be possible to create illusions with two different image effects embedded in one object?

An illusion viewed from the right, with a black background and white text reading "This is not a pipe". An illusion viewed from the left, with a white background and a simple image of the same smoking pipe pictured in the Treachery of Images.

To recap our journey thus far: we want to find ways to make the design and fabrication of illusion knits computationally assisted, and to expand them to double-view images. For double-view images, we care that one image is obvious in the left view and a different image is obvious in the right view (and we discard the head-on view entirely so we have more degrees of freedom). In this blog post, I’ll focus on the design aspect. However, we also needed to examine the physical knitted objects to understand how to consistently manufacture the desired patterns, and to make contributions to knitting machine scheduling to actually fabricate these objects. I’ll save the nitty-gritty of that for another time!

So let’s get started on our monomyth and delve into the secrets of illusion knitting. Our first step, bringing us into the underworld, is asking: how does illusion knitting actually work? It relies on a simple principle that doesn’t require any knitting background to understand. Some rows of stitches are taller than others, so that when they’re viewed at an angle, the tall ones (purple in the image below) block shorter rows (orange) directly behind them.

A diagram showing how the occlusion effect works. On the bottom, there is a flat orange rectangle, a taller purple rectangle, and another flat orange rectangle, representing stitches. An eye is situated at the top left corner, looking down at the row of stitches. Two parallel rays shoot out from the eye: one hits the corner between the orange and purple rectangles and sees both; the other hits the top face of the purple rectangle, which blocks the orange rectangle behind it.

Take the example of a circle with a white foreground and black background.

A small pixel-art image of a white circle on a black background.

Let’s turn this into an illusion knit so that the circle appears in the side view. We need the head-on view to be a nondescript image that conceals the surprise, and because the height information is flattened when viewed head-on, the colour map completely determines what is visible. So, set the colour map to be stripes. Then, to modify what we see at an angle, we need to manipulate the height map! At each point inside the circle, we should raise the stitches that correspond to the foreground colour. At each point outside the circle, we should raise the stitches corresponding to the background colour.
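
Here’s a minimal Python sketch of this recipe, assuming one stitch per pixel, a two-colour input, and stripes that alternate colour every row. `single_view_pattern` is a hypothetical helper for illustration, not part of our actual tool:

```python
import numpy as np

WHITE, BLACK = 0, 1

def single_view_pattern(input_img):
    """Build the striped colour map and the raisedness map for a
    single-view illusion from a two-colour image (one entry per stitch)."""
    rows, cols = input_img.shape
    # Head-on view: plain horizontal stripes, alternating colour each row.
    colour = np.tile((np.arange(rows) % 2)[:, None], (1, cols))
    # Angled view: raise exactly the stitches whose stripe colour matches
    # the input colour there, so that colour stands proud and hides the rest.
    raised = colour == input_img
    return colour, raised

# Toy input: a white circle (WHITE = 0) on a black background.
yy, xx = np.mgrid[0:16, 0:16]
circle = np.where((yy - 8) ** 2 + (xx - 8) ** 2 <= 25, WHITE, BLACK)
colour_map, raised_map = single_view_pattern(circle)
```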

A top-down view of the above image turned into an illusion knit. The knit object is made of alternating black and white stripes. Even in the head-on view, you can see the outline of the circle, because the raised white stitches inside the circle are hitting the light, and the flat white stitches outside the circle are in shadow. The same knitted piece when viewed at an angle. Now the circle is plainly visible as a filled-in white shape with a black background.

Now, you might ask what it means to raise a stitch, or what physical operation this corresponds to. Great question! In our work, we characterize something new that we call “microgeometry”: the small-scale geometry on the surface of a knitted piece. We introduce a layer of abstraction over the microgeometry to use when designing illusion knitting patterns. This layer uses “logical” units: BUMP, which corresponds to raising the knit surface, and FLAT, which corresponds to leaving the knit surface level, and we assume it’s always possible to manufacture the microgeometry associated with these units. Speaking in terms of knitting, a BUMP is a knit with a purl in the next row directly below it, and a FLAT is a knit with a knit in the next row. In particular, in our work, the microgeometry is such that each BUMP unit is the same height, rectangular (so the top face of the microgeometry is always visible), and one solid colour throughout.
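
To make the abstraction concrete, here’s that definition read literally in Python — a toy sketch under the assumptions above, where one logical row expands to exactly two physical rows (`to_stitches` is a name I made up, not part of our tool):

```python
BUMP, FLAT = True, False

def to_stitches(logical_row):
    """Expand one logical row of BUMP/FLAT units into the two physical
    rows described above: every unit is a knit, and the row directly
    below it gets a purl under each BUMP and a knit under each FLAT."""
    top = ["knit"] * len(logical_row)
    below = ["purl" if unit == BUMP else "knit" for unit in logical_row]
    return top, below

to_stitches([BUMP, FLAT, FLAT])
# => (['knit', 'knit', 'knit'], ['purl', 'knit', 'knit'])
```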

I specifically mention these physical attributes because when I was in the abyssal death-and-rebirth phase of the hero’s journey, wracking my brain to make double-view illusions actually possible, I went to a LEGO convention with some friends and was totally gobsmacked to see that someone had already created a simple but effective illusion object. They had probably had some extremely obvious insight that I had been foolishly blind to, and precious months of my life had been jettisoned like so much kitchen scrap into the garburator. On closer examination, the LEGO piece used a series of row-aligned pyramidal units such that the left side of each pyramid was completely obscured when viewed from the right, and thus each input image was sliced into rows and stuck on its respective side of the pyramid. As each knitted bump is made with a single yarn colour, this technique doesn’t apply in knitting, and my work (and stable footing in this plane of existence) is safe.2

A picture of a large flat sheet made of Lego. Two portraits of Daphne from Scooby Doo and Azula from Avatar: The Last Airbender are interleaved such that the 2n-th row is coloured to look like the n-th row of Daphne's picture, and the (2n+1)-th row like the n-th row of Azula's picture. A close-up of the above image.

Daphne/Azula illusion made with Lego.

A diagram of two isosceles triangles joined along one edge to form one roughly equilateral triangle. The left isosceles triangle is yellow, and the right isosceles triangle is orange.

An artist’s interpretation of how the Lego illusion is constructed.

2 It is technically possible, but the effect is not very precise. Rather than seeing yellow on one side and orange on the other, you’d see about two-thirds yellow and one-third orange on one side, and one-third yellow and two-thirds orange on the other side. Incorporating this style of stitch could be an interesting extension.

Now we have an intuitive understanding of how illusions are created. But what does it mean for the knitted illusion to be correct? Well, we can formalize this notion of a “correct” illusion simply and algorithmically: predict what the final knitted object will look like and ensure that it matches the input. In this model, we assume that the illusion is viewable when you’re looking down the rows (or courses), i.e., a raised stitch in one row blocks the stitch in the row beyond it.

For each line of stitches along the viewing direction, where C is the colour map (a 1D array of integers), R is the raisedness map (a 1D array of booleans), and i indexes the stitches along the line (so stitch i−1 sits in front of stitch i), we require for every i:

\((C_i = input_i \wedge \neg R_{i-1}) \vee (C_{i-1} = input_i \wedge R_{i-1})\)

That is, either stitch i is unobstructed (the stitch in front of it is flat) and has the input’s colour, or the stitch in front of it is raised, in which case that raised stitch’s colour is what shows at position i.
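
Transcribed into Python, the constraint becomes a tiny view predictor plus a check. This is a hedged sketch of the one-blocker model above; the names `predict_view` and `satisfies` are mine:

```python
def predict_view(colour, raised, from_left=True):
    """Predict the colours seen along one line of stitches at an angle,
    under the one-blocker model: a raised stitch hides the stitch
    directly beyond it, so position i shows colour[i-1] when
    raised[i-1] holds, and colour[i] otherwise."""
    if not from_left:  # viewing from the other side mirrors the line
        colour, raised = colour[::-1], raised[::-1]
    seen = [colour[0]]  # the stitch nearest the viewer is never blocked
    for i in range(1, len(colour)):
        seen.append(colour[i - 1] if raised[i - 1] else colour[i])
    return seen if from_left else seen[::-1]

def satisfies(colour, raised, target):
    """The correctness check: every position along the line must show
    the input colour, which is exactly the disjunction above."""
    return predict_view(colour, raised) == list(target)
```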

Great news – these constraints are basically enough to guarantee that for a two-colour single-view image, we can generate a pattern satisfying all of them. Going beyond two colours, you can approximate a three-colour image by striping your two colours to make a middle colour. We did this with a picture of the Mona Lisa. It took less than a minute to generate this illusion design, and most of that time was spent resizing the image and loading it into our tool.

A triptych showing our Mona Lisa illusion viewed head-on, from a slight angle, and from an extreme angle. The front view obscures the image somewhat and it is not as distinct. The slight-angle view has more contrast. The extreme-angle view has more distinct areas, with sharper white areas and darker black areas for even more contrast.

With these constraints, we can turn back to our double-view problem. We have a perfectly good set of constraints generated for one input; why not generate those constraints for both sides and throw the whole thing into a solver? Let’s give each pixel’s constraint equal weight and formulate the problem as MaxSAT, where we try to satisfy as many constraints as possible. A sketch of this formulation follows.
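
Here’s roughly what that formulation looks like as code. This is a simplified sketch, not our exact encoding: it assumes two colours (booleans), treats each displayed row of pixels as one line of sight, and uses Z3’s `Optimize` with soft constraints as the MaxSAT solver:

```python
from z3 import And, Bool, Not, Optimize, Or, is_true, sat

def solve_double_view(left_img, right_img):
    """left_img/right_img: same-size lists of rows of booleans.
    Soft-satisfy the per-pixel view constraints for both sides at once."""
    rows, cols = len(left_img), len(left_img[0])
    C = [[Bool(f"c_{r}_{i}") for i in range(cols)] for r in range(rows)]
    R = [[Bool(f"r_{r}_{i}") for i in range(cols)] for r in range(rows)]
    opt = Optimize()
    for r in range(rows):
        # Edge stitches are never blocked in their respective view.
        opt.add_soft(C[r][0] == left_img[r][0])
        opt.add_soft(C[r][cols - 1] == right_img[r][cols - 1])
        for i in range(1, cols):
            # Left view: the blocker for stitch i sits at i - 1.
            want = left_img[r][i]
            opt.add_soft(Or(And(C[r][i] == want, Not(R[r][i - 1])),
                            And(C[r][i - 1] == want, R[r][i - 1])))
            # Right view: mirrored, the blocker for stitch i - 1 sits at i.
            want = right_img[r][i - 1]
            opt.add_soft(Or(And(C[r][i - 1] == want, Not(R[r][i])),
                            And(C[r][i] == want, R[r][i])))
    assert opt.check() == sat  # always sat: all constraints are soft
    m = opt.model()
    as_bool = lambda v: is_true(m.eval(v, model_completion=True))
    return ([[as_bool(v) for v in row] for row in C],
            [[as_bool(v) for v in row] for row in R])
```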

So, we wrote a program that takes two images and generates constraints such that the left side appears to be the first input and the right side appears to be the second. If we just ask the MaxSAT solver to synthesize a solution, we get this:

Four images. At the bottom are the predictions for what the pattern will look like from the left and right. On the left is a bunny, which is solid white where the teapot and bunny overlap, mostly white where the bunny is and the teapot isn't, and somewhat white where the teapot is and the bunny isn't. On the right is the same but reversed: the teapot is mostly visible, but ghost fills of where the bunny is and the teapot isn't are still visible. On the top row are two pictures of the knitted illusions corresponding to the MaxSAT pattern. The knitted illusion still resembles the prediction but is worse in quality, as the areas where the other side's image should be hidden are still quite bright.

The predicted result isn’t too bad. In fact, it’s optimal: MaxSAT shows us that for two inputs that completely differ, we can achieve at best 75% matching in one view and 75% in the other. In reality, though, I’m not pleased with what we were able to do with MaxSAT. The artifacts are still very obvious when we actually look! The resulting pattern also takes about 12 hours to knit on the knitting machine, which is pretty slow, and it’s such a large pattern that we have to divide it into 8 pieces so that each piece fits in the machine’s memory.

I’d like to do better, perceptually speaking. Satisfying the constraints as given seems hopeless – as currently formulated, we can’t do better than satisfying 75% of them, and the full constraint set for two arbitrary images is simply never satisfiable. At this point, let’s take a step back and examine the kinds of two-view effects that are actually possible.

The simplest two-view effect would be one colour, A, visible from the left, and another colour, B, visible from the right. Let’s say A is white and B is black. We already know that having white and black opposed is impossible: to get an occlusion effect, some stitches have to be raised, but every raised stitch is visible from both sides.

First off, notice that we can get an extra degree of freedom by having two flat rows for every bump row. In the left view below, half the flat rows are obscured and half are visible; in the right view, the exact opposite set is visible and obscured.

The top of the diagram shows the side view of a pattern: black BUMP, blue FLAT, white FLAT, black BUMP, blue FLAT, white FLAT. The bottom shows the predicted view from the left or right side. On the left side, we see two black rows and one white row. On the right side, we see a black row, a blue row, two black rows, a blue row, and a black row.
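
Running the toy `predict_view` from earlier on one repeat of this pattern reproduces the diagram (I append an extra BUMP so the trailing FLAT stitches also have a blocker; edge stitches are never blocked):

```python
# One repeat of the diagram's pattern, plus a trailing black BUMP.
colour = ["black", "blue", "white", "black", "blue", "white", "black"]
raised = [True,    False,  False,  True,    False,  False,  True]

predict_view(colour, raised, from_left=True)
# => ['black', 'black', 'white', 'black', 'black', 'white', 'black']
predict_view(colour, raised, from_left=False)
# => ['black', 'blue', 'black', 'black', 'blue', 'black', 'black']
```

From the left, the blue flats vanish behind the bumps; from the right, the white flats do.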

What if, instead, we asked for white to be visible from the left, and a striped pattern of white and black to be visible from the right? Then, we end up with the following result (one knit object viewed from opposite sides):

Two images of the same knitted object viewed from opposite sides. In the left image, we see a white square on a black-and-white striped background, giving a gray impression. In the right, the background is fully white, and the inner square is black-and-white striped.

The stripe of black and white recalls the use of stripes in single-view illusions to create a middle gray colour.

Now we’ve successfully created a two-view effect!

So, here’s a new idea for creating illusions from two arbitrary input images that is perceptually better than just minimizing constraint violations. We can probably replace some of the inputs’ original colours with striped “fills” while keeping the images recognizable. In other words, we systematically relax the constraints (by modifying the input images) until an illusion becomes realizable.

In this workflow, we accept pre-quantized images (such that a user is free to use their favourite image quantization software), “recolour” them, and generate a new pattern thusly:

A workflow diagram. On the very left are two square inputs, with Left input two filled-in black squiggles bordering each side, and Right input underneath it as one big closed squiggle in the middle. There is one arrow from each input to the next phase, captioned "Generated image 'overlaps'". This phase turns the black areas in the Left input into a transparent green, and Right input to a transparent blue, then overlays them into one square. Each region of colour is now assigned a label; for example, where the blue and green overlap, the label is B|B, meaning black on the left input and black on the right input. This phase is then fed into the next phase, "Tile bank", which is a diagram of different types of patterns and their predicted left and right outputs. After going through the tile bank, the previously-white parts of each input image have been replaced by black-and-white vertical stripes, shown underneath the tile bank diagram. Finally, on the rightmost side, we show in one square the final pattern generated, which has a solid black where both images overlap, a plain stripe where they do not, and different stripes for B|W and W|B.

Using the stripes as guidance3, we have a tile template bank that we can draw from to automatically find viable new assignments of input colours to these pattern “fills” thusly (a sketch of the search follows the list):

  1. Overlap the two images so that for each pixel, we know what colour it should be on the right and left
  2. Collect all unique colour overlaps (e.g. white | black, black | black, white | white, black | white)
  3. Go through each input colour and try to assign a new fill type (solid or striped of a specific colour)
  4. Check all overlaps: given my new recolouring, do I have a possible tile for each overlap? If not, this assignment isn’t viable.
  5. Finally, do a lightness order check
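
Here’s the promised sketch of that search. The two-colour tile bank, the lightness metric, and the distinct-fills requirement below are made-up stand-ins for illustration (note that the infeasible white-opposite-black pairing from earlier is absent from the bank); the real bank also stores each tile’s stitch pattern:

```python
from itertools import product
import numpy as np

# Which (left fill, right fill) pairs some tile can realize.
FEASIBLE = {
    ("white", "white"), ("black", "black"), ("stripe", "stripe"),
    ("white", "stripe"), ("stripe", "white"),
    ("black", "stripe"), ("stripe", "black"),
}
FILLS = ["white", "stripe", "black"]
LIGHTNESS = {"white": 2, "stripe": 1, "black": 0}

def keeps_order(colours, fill):
    """Lightness order check (step 5): colours sorted light-to-dark must
    keep a non-increasing fill lightness after recolouring."""
    light = [LIGHTNESS[fill[c]] for c in sorted(colours, reverse=True)]
    return all(a >= b for a, b in zip(light, light[1:]))

def find_recolouring(left_img, right_img):
    """left_img/right_img: same-shape numpy arrays of grayscale levels
    (larger = lighter). Search for fill assignments making every colour
    overlap realizable by some tile."""
    overlaps = {(int(l), int(r))
                for l, r in zip(left_img.flat, right_img.flat)}  # steps 1-2
    left_cols = sorted({l for l, _ in overlaps})
    right_cols = sorted({r for _, r in overlaps})
    # Step 3: brute force, fine for the handful of colours we use.
    for assign_l in product(FILLS, repeat=len(left_cols)):
        for assign_r in product(FILLS, repeat=len(right_cols)):
            # Keep distinct input colours distinct so images stay legible.
            if (len(set(assign_l)) < len(left_cols)
                    or len(set(assign_r)) < len(right_cols)):
                continue
            fl = dict(zip(left_cols, assign_l))
            fr = dict(zip(right_cols, assign_r))
            if not all((fl[l], fr[r]) in FEASIBLE for l, r in overlaps):
                continue  # step 4: some overlap has no viable tile
            if keeps_order(left_cols, fl) and keeps_order(right_cols, fr):
                return fl, fr  # step 5 passed
    return None

# Toy usage: a black pixel on white (left) vs. white on black (right).
left = np.array([[255, 255], [255, 0]])
right = np.array([[0, 0], [0, 255]])
find_recolouring(left, right)
# => ({0: 'stripe', 255: 'white'}, {0: 'stripe', 255: 'white'})
```

Pleasingly, the toy run recovers the white-versus-stripe trick from the example above: black in each input gets replaced by a stripe fill.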

The key to this setup, perceptually, is that multiple tiles can map to a specific fill on one side. For example, to see black-and-white stripes in the left view, we could use the tile above, or we could simply use an all-flat striped tile. This flexibility lets us “hide” the other view’s image better, since we have a way to keep the fill the same in one view even as it changes in the other! Speaking in terms of the monomyth, this is the transformation that will allow us to return to the known world.

3 How did we determine the templates for the tile template bank? Mostly simplicity and fabricability. You could design more complex tiles, e.g. checkerboard instead of stripes, but it would take longer to knit. Also, stripes have established usage in traditional illusion knitting designs.

Let’s look at some examples that we made using this approach.

Four examples of successful illusion knits. In the top left is an EKG pulse (which resembles the PLSE logo) and a heart, both black on a white background. We show the input images as well as the predictions from the pattern. The knitted version is done in purple and white. Below that is the SIGGRAPH and SIGGRAPH Asia logos, which have the same shape (two swoops suggesting two hemispheres), except the SIGGRAPH logo on the left is half red and half blue, and the Asia logo on the right is all orange. On the top right is an illusion displaying the words ILLUSION from one side and KNITTING from the other side in black and white. On the bottom right is an example with Van Gogh's Self-Portrait and Sunflowers that uses three yarn colours. We show the input images, the predictions (using blue and yellow), and the knitted object, which is made in yellow, light blue, and dark blue. Three images of the same knit object viewed from the left side, the right side, and head-on. The left side shows a simplified contour of the Stanford bunny. The head-on view reveals part of both images. The right side shows the Utah teapot.

I’d like to note that in our framework, knitting long rows of stripes avoids within-row colour changes, and works up relatively quickly. The bunny/teapot example we have here only took about two hours to knit, compared to MaxSAT’s 12.

The other main benefit of our new system is that it’s easy to directly manipulate the outcome! Oftentimes, in fully-automated approaches, it’s not clear how to edit the inputs or tweak the parameters to get exactly what you want. Here, you can simply edit the input images or modify the tiles. For example, in the Van Gogh Self-Portrait/Sunflowers example, the quantized input had a lot of detail around his head that made the image much noisier. I simply erased that detail in MS Paint to get a cleaner result.

Three images. The first is of the normal, full-colour, full-detail Self-Portrait input. The second has been quantized into just white, gray, and black. The third has had detail on the wall removed so that Van Gogh's portrait is the only thing in the middle of the image.

You could also experiment with different quantization methods. Below, we tried quantizing the Scream (downloaded from Wikipedia) four ways. (a) shows what happens when you use our default quantization method to process the image, (b) was processed with Adobe Photoshop’s posterize filter, (c) was hand-drawn in just three colours, and (d) was generated via Stable Diffusion.

Four versions of the Scream. The top row has the coloured inputs, the bottom row has their grayscale outputs. The leftmost column has a lot of dithering and is somewhat grainy. The next one over has more contrast and much of the sky detail has gone away. The third is simpler and recreated with round brush strokes. The last one is crisp and has exaggerated detail and clear lines in the landscape.
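
If you’d like to try this at home, recent versions of Pillow will get you a three-tone input in a few lines. This is a stand-in for, not a reproduction of, our default quantizer, and `scream.jpg` is a placeholder filename:

```python
from PIL import Image

# Median-cut quantization to three gray tones, with dithering on
# (grainy, like (a) above) or off (flatter, closer in spirit to (b)).
img = Image.open("scream.jpg").convert("L")
dithered = img.quantize(colors=3)  # Floyd-Steinberg dithering by default
flat = img.quantize(colors=3, dither=Image.Dither.NONE)
dithered.convert("L").save("scream_dithered.png")
flat.convert("L").save("scream_flat.png")
```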

As we return to the overworld, I’d summarize our journey thusly: we accelerated the design of single-view illusions, and developed a way to create the first known double-view illusion knitting patterns! We did so by analyzing the constraints of knitted illusions, which depend on the physical attributes of knitting and fabrication; discovering that those constraints were wildly overconstrained for two views; and creating a system for relaxing them with direct control. These advances allowed us to generate lots of cool illusions.

As a treat for reading my blog post, I wanted to share a couple of other ideas that I worked on that ultimately weren’t fully formed enough to disseminate anywhere else.

First, I tried to design this three-way illusion with one image from the left, one image from the right, and one image head-on. Because of the overconstrained nature of left/right, the head-on view is even more constrained, and can’t diverge from the intersection of the side view images.

The left view. There is a striped background, with a square inside it (square A) that is filled with a similar stripe, but offset. Finally, there is a small black square (square B) inside that square. An input image and head-on view. The background is striped. There is a square (square A) inside it, which is striped, but the stripes are offset. There is a second square (square B) inside the first, which has a third stripe pattern. Finally, inside that square is a black triangle. The right view. The background is striped, with a black square (square A) inside it.

I think it should be possible to create some interesting three-way illusions, but we’ll need to create better design guidance to make interesting examples. There might be a more relaxed constraint than the one I identified (the head-on view being within the intersection of the two side views), for example.

Second, I was very interested in making “animations” with illusion knitting. Like, if we put a knitted surface on a cylinder, we’d only be able to see so much of it at a time. What if we could walk around a knitted cylinder and see an “animation”? To do this, we’d create an object with multiple “frames” in the head-on view and the side view, and distribute these frames in space. Here’s an example:

A knit object from a head-on view depicting three white shapes on a purple-and-white striped background, placed side-by-side. The first shape is a circle. The second shape is the top and right leg of a star. The last shape is the top, left, right, and bottom left legs of a star. The same knit object, viewed from the right side. The leftmost object is now the middle and top leg of the star. The middle object is now the top, left, and right legs of the star. The right object is now all five legs of the star, with the last one still rather faint as it is closest to the camera. The background is purple.

The star is “growing” – it starts out small in the head-on view, and grows in the side view, and then you walk to the next star over and its head-on view is even larger.

A gif showing the previous knit "animated". It is wrapped around a large cylinder and slowly spun from the leftmost to the rightmost frame.

In general, I wasn’t too happy with the effect of the illusion, as you can only have a limited number of frames, and I haven’t yet discovered an animation that really makes the medium pop. It’s possible that I need a different setup for capturing the animation, too – something more akin to a zoetrope and a faster spinning mechanism maybe?

Third, I was interested in how draping the flexible surface over an irregular or hard surface would affect the results, similar to how the Lego illusion used triangles to get an occlusion effect. I haven’t tried making anything in this style yet, so if you have ideas, why not reach out and let me know!

Acknowledgements. This work couldn’t have happened without my fellow experts, Yuxuan Mei and Ben Jones, and my advisors Zachary Tatlock and Adriana Schulz. Also shout-out to my solvers class group project partners, Raymond Guo, Kaiyang Wu, and Jimmy Cheong, as we worked on the constraints and MaxSAT formulation together. Finally, thank you so much to Sorawee for editing this blog post and for all your valuable feedback!