Recently I set out with the goal of making flow maps that were dynamically generated within the UDK. This is a breakdown of what I did and how I did it, though I won't be going into much technical detail; instead, I'll focus on the ideas behind it and the procedure I followed to make it work.
If anyone wants to read more on flow maps, I recommend the following links (the ones I used) to get a basic understanding of their function and creation:
Phil Johnson's awesome article about how he made them (with walkthrough):
Dmitry Barannik's pdf breakdown of his flow map shader:
The Valve white paper on their flow maps:
For my implementation, I initially tested with Phil Johnson's approach, but ended up with something closer to Dmitry Barannik's material, including some of the specific improvements he implemented.
My basic process was this:
1. Use an extra camera's post effect to grab a height map, offsetting the image in 4 directions to gather height information from the area around each pixel.
2. Separate each directional offset amount into 4 separate channels. (X+,X-,Y+,Y-)
3. Blur those four values separately.
4. Plug those 4 values into a basic flow map shader.
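Before getting into the details, here is a rough sketch of step 4, the basic flow map shader the four values eventually feed. It follows the standard two-phase technique from the Valve paper linked above; the function names and parameters (`sample_texture`, `flow_strength`) are my own illustrative placeholders, not UDK node names — in the actual material this is built out of panner/lerp-style nodes.

```python
import math

def sample_flow(sample_texture, uv, flow, time, flow_strength=0.25):
    """Sample a texture distorted along a per-pixel flow vector.

    Two copies of the texture pan along `flow`, half a cycle out of
    phase, and are crossfaded so neither copy is visible at the moment
    it snaps back to its starting position.
    """
    phase0 = math.modf(time)[0]        # fractional part, 0..1
    phase1 = math.modf(time + 0.5)[0]  # half a cycle later

    uv0 = (uv[0] - flow[0] * phase0 * flow_strength,
           uv[1] - flow[1] * phase0 * flow_strength)
    uv1 = (uv[0] - flow[0] * phase1 * flow_strength,
           uv[1] - flow[1] * phase1 * flow_strength)

    # Triangle wave: 1 when phase0 is at its wrap point, 0 half a cycle later.
    blend = abs(phase0 * 2.0 - 1.0)
    return sample_texture(uv0) * (1.0 - blend) + sample_texture(uv1) * blend
```

The crossfade is what hides the "pop" when each panning copy resets, which is also why an overly high time multiplier makes the transition visible again (more on that in the cons list below).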
Capturing height information in UDK:
I wanted to capture 2D height information in the UDK (and change it a bit) before sending it into the flow map. To do this, I needed to store the height information, but for every pixel checked, store 4 values (each one representing one direction on an axis). The reason to keep these separate (for now) is so that I could blur each one individually. The only way to do this in the UDK is to use SceneCapture2D actors and write to RenderTextureTarget2Ds. Unfortunately, I couldn't write all 4 channels, so I had to separate out each axis into its own pass. That means its own SceneCapture2D actor, and its own RenderTextureTarget2D. I kept both passes separate until I put them back into the flow map shader.
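To make the 4-value idea concrete, here's a CPU-side sketch (not UDK node-graph code) of what the two axis passes compute per pixel. The `offset` parameter stands in for the pixel offset applied in the capture post effect, and edge handling is my own assumption (clamp to the border texel):

```python
def directional_offsets(height, offset=1):
    """Split a height map into four per-direction samples.

    For each pixel, sample the height `offset` texels away in +X, -X,
    +Y and -Y. In the UDK this happens in two SceneCapture2D passes
    (one per axis, two channels each); here each direction simply gets
    its own 2D list. Out-of-range samples clamp to the border.
    """
    h, w = len(height), len(height[0])

    def sample(x, y):
        x = min(max(x, 0), w - 1)
        y = min(max(y, 0), h - 1)
        return height[y][x]

    xpos = [[sample(x + offset, y) for x in range(w)] for y in range(h)]
    xneg = [[sample(x - offset, y) for x in range(w)] for y in range(h)]
    ypos = [[sample(x, y + offset) for x in range(w)] for y in range(h)]
    yneg = [[sample(x, y - offset) for x in range(w)] for y in range(h)]
    return xpos, xneg, ypos, yneg
```

Keeping the four results in separate channels is what makes the per-direction blur in the next step possible.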
While I could have forced all the information into 1 pass, I ran into depth accuracy issues with texture compression and wanted to gather as high of a range as possible for each axis. For each camera, I plugged in a capture post process chain that featured a special material-based post effect. That post effect was made to capture the scene depth. Because I wanted to reuse the same capture post effect twice, I decided to make a material that could be instanced. Inside that I included multiple material functions. Below is that material.
The tricky part came from reading the actual height in a range that worked for me. For my capture cameras, I had the gradient offset set relatively low (~225), and the cameras themselves offset very far. Because I wanted something super close to orthographic, my SceneCapture cameras (rendering as cheaply as possible, unlit, etc.) had a field of view of only 25. That meant for my camera to see the entire area of my fluid surface, it had to be very far away (hence the large offset). In the end, what matters is having a value of 1 returned at the top of the height range you want to capture, and a value of 0 at the bottom.
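That "1 at the top, 0 at the bottom" requirement is just a clamped linear remap. A sketch, with hypothetical `near_depth`/`far_depth` parameters standing in for the gradient offset and camera distance tuning described above:

```python
def depth_to_height(scene_depth, near_depth, far_depth):
    """Remap camera depth to a 0..1 height value.

    `near_depth` is the depth at the top of the height range you want
    to capture (returns 1); `far_depth` is the depth at the bottom
    (returns 0). Values outside that window are clamped, so anything
    above the range reads as full height and anything below as flat.
    """
    t = (far_depth - scene_depth) / (far_depth - near_depth)
    return min(max(t, 0.0), 1.0)
```

In practice the tuning is guesswork, which is why a debug material instance for these values (one of the tips below) pays off.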
For more information, check out Dave Prout's excellent walkthrough on looking at scene depth:
Blurring the captured image:
Now that I have full-range depth values that have been offset, I basically use the same trick to blur each axis pass individually. I made 2 new flat planes and had them render unlit versions of the original textures. I then used a SceneCapture2D actor to capture those images, but instead of a fancy depth post effect, I simply blurred them and wrote them to new RenderTextureTarget2Ds.
What is great about this technique is that a one pixel offset becomes a much softer change in value (creating the gradual change in flow around objects). If you want a bigger influence from each object, use a bigger blur. I ended up using the basic blur post process chain node for this, but feel free to experiment for different results.
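This isn't the UDK blur node itself, but a minimal stand-in that shows why the blur matters: a hard one-texel height step turns into a gradient the flow map can follow, and a larger radius widens each obstacle's region of influence.

```python
def box_blur(img, radius=1):
    """Separable box blur over a 2D grid, clamping at the edges."""
    h, w = len(img), len(img[0])

    def blur_1d(get, length):
        out = []
        for i in range(length):
            lo, hi = max(i - radius, 0), min(i + radius, length - 1)
            vals = [get(j) for j in range(lo, hi + 1)]
            out.append(sum(vals) / len(vals))
        return out

    # Horizontal pass, then a vertical pass over the horizontal result.
    horiz = [blur_1d(lambda x, row=row: row[x], w) for row in img]
    cols = [blur_1d(lambda y, x=x: horiz[y][x], h) for x in range(w)]
    return [[cols[x][y] for x in range(w)] for y in range(h)]
```

Run on a height map with a single raised texel, the spike flattens and spreads into its neighbors, which is exactly the gradual falloff you want around objects in the water.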
One important thing is making sure that the SceneCapture2D cameras for these passes only render what they need to. That means making the render material on the plane unlit. The cameras themselves have to see the world in lit mode to make sure the post effect works, but the near and far planes, shadows, fog, etc. can all be turned off.
Now it's time to throw everything together.
The Master Material:
I don't want to go into explicit detail about this material since it is so similar to some of the materials I linked at the start of this post, but I will talk in detail about the basic combination of the 2 RenderTextureTarget2D textures.
Because I want my flow map to support both forward and backward movement, I appended the two positive-direction values to each other, and likewise the two negative-direction values, creating two separate flow maps: one representing the strength of the positive flow, and the other the strength of the negative flow. Finally, I subtracted one from the other, since negative values are fine (they simply reverse the direction).
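Per pixel, that combination boils down to one subtraction per axis. Here is a sketch using single-pixel scalar values; which map gets subtracted from which is just a convention for which way positive flow points, and I'm assuming positive-minus-negative here:

```python
def signed_flow(xpos, xneg, ypos, yneg):
    """Collapse the four blurred directional height samples for one
    pixel into a signed 2D flow vector.

    The two positive-direction samples live in one 2-channel map and
    the two negative-direction samples in the other; subtracting one
    from the other yields values that can go negative, which simply
    reverses the flow direction along that axis.
    """
    return (xpos - xneg, ypos - yneg)
```

The resulting signed vector is what feeds the two-phase panning in the flow map shader.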
Overall, this was a blast of a little experiment. The final shader wasn't too costly, though accidentally making any of the render textures overly expensive can definitely add to the performance implications, as would too expensive of a blur.
When it comes to usability, I think something like this could absolutely have a use in a real production environment. Any kind of environment where something is moving through water could happily showcase the effects created here. Because the original capture cameras have clip planes, you could probably configure things to support proper overhangs and only grab height changes within a very small radius.
Unfortunately, everything is not 100% perfect with this solution. Below are some good and bad conclusions, as well as some potential future adjustments and some tips for those trying it on their own.
The good:
-Truly appears to be dynamic, visually reacting to objects picked up in the height map
-Could be adapted to support overhangs
-Cost not exorbitant, and can clearly be adjusted on a case by case scenario
The bad:
-Only really worthwhile in a truly dynamic setting. Otherwise, why not just bake a regular flow map?
-Minor depth changes create visual artifacts along lines where the flow speed changes
-Overpowered flow intensity can create visual artifacts
-The phase transition between the two panning textures (meant to cover the pop) is still visible, even with noise, when the time variable is set too high
-Minimal render texture support in UDK post processes makes implementation of this technique unnecessarily difficult
Tips:
-Debug shader versions are your friend. Test your height capture before making a flow map. Test your flow map before putting in your height capture.
-Make a debug material instance that lets you change the guesswork values quickly (like gradient offset, etc)
-When you unwrap your plane, make sure to do a planar UV projection from above to mimic what the camera sees.
-Try for a field of view of less than 50 in your capture cameras. Anything more causes distortion.
-Try a low time multiplier value and flow power value to start things off.
-For implementation, it could be worth setting this part of your scene aside away from player view and making proxy objects to match up with the real thing, then copying the material over onto the final water.
Future plans (I have already started on these):
-This works great for coasts. I could put in an oscillating sine wave to represent waves in the flow on beaches
-Adding vert painting support to help manually control problem situations
-Adding dynamic tessellation to the water, creating raised areas where there is object interaction
-Reducing cost and difficulty of implementation
Thanks for reading!
As always feel free to email me any questions at all!