There are a lot of reasons you might want to draw on a texture. Maybe you want your players to literally be able to draw on something. Maybe you want to add decals to an image. Maybe you just want to procedurally create simple images like circles and not have them take up a bunch of space in your app download.
If so, your first mistake will likely be to use SetPixel to draw individual pixels. This will work, but it's agonizingly slow if you're changing more than a handful of pixels. So you do a little reading, learn that GetPixels/SetPixels is faster than setting them one at a time, and dust off your hands and call it a job well done. Right?
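For reference, here's a minimal sketch of the CPU-bound approach this article is steering you away from. (The `targetTexture` field and `CpuCircleDrawer` name are assumptions for illustration; the texture must be marked readable in its import settings.)

```csharp
using UnityEngine;

public class CpuCircleDrawer : MonoBehaviour
{
    public Texture2D targetTexture; // hypothetical: a readable Texture2D assigned in the inspector

    public void DrawFilledCircle(Vector2 center, float radius, Color color)
    {
        // Read all pixels at once - far faster than per-pixel GetPixel calls
        Color[] pixels = targetTexture.GetPixels();
        int w = targetTexture.width, h = targetTexture.height;
        for (int y = 0; y < h; y++)
        {
            for (int x = 0; x < w; x++)
            {
                // Distance check in UV space against the circle
                Vector2 uv = new Vector2((float)x / w, (float)y / h);
                if (Vector2.Distance(uv, center) < radius)
                    pixels[y * w + x] = color;
            }
        }
        // Write them back and upload the result to the GPU
        targetTexture.SetPixels(pixels);
        targetTexture.Apply();
    }
}
```

Even in this batched form, every pixel is touched by the CPU and the whole texture is re-uploaded on each change - which is exactly the bottleneck described below.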
Nope. There's a problem with SetPixels, and it's a pretty fundamental one: it's CPU-bound. Modern GPUs are orders of magnitude more powerful than general-purpose processors. A modern Core i7 will manage around 70 gigaFLOPS (billions of floating-point operations per second); a modern GPU like the GTX 1080 will deliver around 10-12 teraFLOPS. Now, these values aren't directly comparable for a number of reasons, including the greater versatility of CPUs and the massive parallelization of GPUs, but without a doubt graphics cards have a lot of potential that goes untapped when you use software drawing methods like SetPixels. Even if your users aren't running the latest top-of-the-line GPU (or are on a mobile device), their graphics hardware will still be better at drawing things, simply because that's what it's designed for. The bottom line is that any processing (especially graphics-related) that can be moved to the GPU probably should be.
So how do you offload, for example, drawing of circles into textures to the GPU? I can see fear in the eyes of some of you, worrying that I'm about to give you a lesson in shader programming. And while that's certainly a good way to draw stuff onto textures, it's a whole new field of expertise, and one I'm most certainly not qualified to write an article about. Instead, I'm going to teach you a disgustingly simple strategy which takes full advantage of Unity's simplicity and your experience with its API.
Enter the Render Texture
If you're not familiar with Render Textures, you're missing out on one of Unity's most powerful visual tools. In fact, until Unity 5 (and the updated subscription pricing model), the Render Texture was one of maybe three major features that separated the free version from Unity Pro - and many, many people forked over thousands of dollars just to be able to use Render Textures. Fortunately, these days everyone gets render textures for free, and their ubiquity now only makes them more useful, because it allows articles like this one to exist.
Put simply, a Render Texture is a texture onto which a camera can render an image. Pretty straightforward, right? The most obvious example of its usage is something like a security camera displaying its image on a screen in-game, or rendering an image of an item into a window to be displayed in the GUI. These are fine uses of Render Textures, but they are capable of so much more if used cleverly. I'm going to describe one such usage here.
The Camera Setup
The in-scene setup of a system like this is crucial to its success. Here are the basics:
Create a new layer that will be used explicitly for this camera. Ensure that all other cameras in the scene have this layer removed from their culling masks.
Create a new Camera. Let's call this "Drawing Camera" for reference.
Place it at the location (0.5, 0.5, -1) in world space. Set its Projection to Orthographic, and its Orthographic Size to 0.5. This ensures that its render area covers the space from (0,0) to (1,1) in the world, which makes placement of objects to be rendered much simpler.
Set the drawing camera's culling mask to include only the layer created in the first step.
Create a Render Texture in your Project pane. It can go anywhere you like. It's possible to create them from script if desired, but for this, I'll use one from the project folder.
Assign this render texture to the "Target Texture" slot of the drawing camera, and then assign it as the main texture of the object you want to draw on.
That's pretty much it. We now have a camera setup that can render stuff onto your object, and we haven't even touched the code yet. So how do we use this to draw?
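If you'd rather build this setup from script, the steps above translate roughly to the following sketch. (The layer name "DrawingLayer" and the two public fields are assumptions; adjust them to match your project.)

```csharp
using UnityEngine;

public class DrawingCameraSetup : MonoBehaviour
{
    public RenderTexture renderTexture; // the render texture from your Project pane
    public Renderer targetRenderer;     // the object you want to draw onto

    void Start()
    {
        var camObject = new GameObject("Drawing Camera");
        var cam = camObject.AddComponent<Camera>();

        // Cover world space from (0,0) to (1,1)
        camObject.transform.position = new Vector3(0.5f, 0.5f, -1f);
        cam.orthographic = true;
        cam.orthographicSize = 0.5f;

        // Render only the dedicated drawing layer ("DrawingLayer" is an assumed name)
        cam.cullingMask = 1 << LayerMask.NameToLayer("DrawingLayer");

        // Render into the texture, and show that texture on the target object
        cam.targetTexture = renderTexture;
        targetRenderer.material.mainTexture = renderTexture;
    }
}
```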
The Background Setup
Our next step will be to set up the image that will be used as our backdrop. This is actually very simple:
Place the desired image in the scene. You can use either a Sprite or a Quad with its own material for this; in the case of the latter, I recommend using a material with an unlit shader, as lighting affecting the render texture would be odd.
Place this object at (0.5, 0.5, 10) and scale it such that its bounds go from 0 to 1. It should match your camera's rendering box precisely.
Set its layer to the layer we created in the first section.
And you're done. If you used your object's old texture as the background image, you should now be able to play your game, and it'll look the same as it did before. (It'll be a smidgen slower because it's now rendering that texture every frame, but we'll clean that up at the end.)
If you don't have a texture, you can just create a Quad with an unlit material that's the color of the background you want.
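The background setup can also be scripted. Here's a minimal sketch under the same assumptions as before (the material field and "DrawingLayer" name are placeholders):

```csharp
using UnityEngine;

public class BackgroundSetup : MonoBehaviour
{
    public Material unlitBackgroundMaterial; // assumed: a material using an unlit shader

    void Start()
    {
        var quad = GameObject.CreatePrimitive(PrimitiveType.Quad);
        quad.GetComponent<Renderer>().material = unlitBackgroundMaterial;

        // Behind everything else the drawing camera will render
        quad.transform.position = new Vector3(0.5f, 0.5f, 10f);

        // A default quad is 1x1 and centered on its position, so at (0.5, 0.5)
        // its bounds run from 0 to 1 - exactly matching the camera's view
        quad.layer = LayerMask.NameToLayer("DrawingLayer"); // assumed layer name
    }
}
```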
For Efficiency's Sake: Rendering at will
In many cases, the textures you're working with won't be changing every frame. They'll probably only change in the moments where you're interacting with them. In that case, you will want to avoid rendering the camera when things aren't changing. To do this, simply disable the Camera component in its inspector, and then call camera.Render() after you've interacted with it.
I recommend leaving the camera on while you're getting the configuration set up; it's much easier when you have a realtime view. If you like, you can have your script disable the camera in Start(); that way, during development, you always have a live view.
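A small sketch of that pattern (the component and field names are placeholders):

```csharp
using UnityEngine;

public class OnDemandRenderer : MonoBehaviour
{
    public Camera drawingCamera; // the drawing camera from the setup above

    void Start()
    {
        // Leave the camera enabled in the editor for a live view;
        // disable it at runtime so it only renders when asked to
        drawingCamera.enabled = false;
    }

    // Call this after changing anything the drawing camera can see
    public void RefreshTexture()
    {
        drawingCamera.Render();
    }
}
```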
Example 1: Drawing a circle with a LineRenderer
Now that we have our background, what can we do with it? Well, let's create a new GameObject and add a LineRenderer and a new script to it - let's call it CircleDrawer. In CircleDrawer, we're going to set the LineRenderer's points to be in a circle, using Mathf.Sin and Mathf.Cos in a loop. The circle will be drawn based on parameters chosen in the inspector. For ease of editing, we're also going to add an OnValidate function, which is called every time a value is changed in the inspector. With this, we'll be able to change our circle in realtime in the editor.
public class CircleDrawer : MonoBehaviour {
    public Vector2 center = new Vector2(0.5f, 0.5f);
    public float radius = 0.45f;
    public float thickness = 0.1f;
    public int pointCount = 32;

    private LineRenderer lr;

    public void UpdateCircle() {
        if (lr == null) lr = GetComponent<LineRenderer>();
        Vector3[] points = new Vector3[pointCount];
        for (int p = 0; p < points.Length; p++) {
            float theta = (float)p * Mathf.PI * 2f / points.Length; // theta runs from 0 to 6.28, perfect for trig operations
            points[p] = new Vector3(center.x + Mathf.Cos(theta) * radius, center.y + Mathf.Sin(theta) * radius, 0f);
        }
        lr.positionCount = pointCount; // must be set before SetPositions, or points will be dropped
        lr.SetPositions(points);
        lr.loop = true;
        lr.useWorldSpace = true;
        lr.widthMultiplier = thickness;
    }

    void OnValidate() {
        UpdateCircle();
    }
}
Make sure that your LineRenderer's layer is set to the same layer we've been using.
Now, you shouldn't need to hit play to see the results - just start editing the values in the inspector, and you should be able to watch the circle move around on your texture.
Now the cool thing is this: the coordinate system being used by the script above should place it at precisely the same coordinates that are used by the UV texture system. So, for example, if you're using a raycast and want to place a circle on the texture using raycastHit.textureCoord, you can assign that value directly to the "center" field, and the circle will be centered where your ray hit the surface!
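For example, in a hypothetical script holding a reference to the CircleDrawer above, the hookup might look like this (using the legacy Input API for brevity; note that textureCoord only works if the hit object has a MeshCollider):

```csharp
using UnityEngine;

public class CirclePlacer : MonoBehaviour
{
    public CircleDrawer circleDrawer; // the component from the example above

    void Update()
    {
        if (Input.GetMouseButtonDown(0))
        {
            Ray ray = Camera.main.ScreenPointToRay(Input.mousePosition);
            if (Physics.Raycast(ray, out RaycastHit hit))
            {
                // UV coordinates map one-to-one onto the drawing camera's world space
                circleDrawer.center = hit.textureCoord;
                circleDrawer.UpdateCircle();
            }
        }
    }
}
```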
Example 2: Sketching with raycast
Once you have the camera setup, it's trivial to turn any interaction with a surface into an alteration of the texture of that surface. All you need to do is to use the UV coordinates of the interaction (such as a collision or a raycast) to place the desired effect.
In this example, the player will be able to draw a line on the sphere's surface. This will involve two simple components; the UVDragger will turn mouse drags into UV coordinates, and the TextureLineDrawer will turn those into a series of LineRenderers. Outside of these, the same camera setup as above will turn those LineRenderers into lines on the surface of the sphere.
We will start with the UV coordinates of the mouse on the sphere:
public class UVDragger : MonoBehaviour
{
    public TextureLineDrawer lineDrawer;

    private bool isDrawing = false;

    private void Update()
    {
        // Uses the new Input System (requires "using UnityEngine.InputSystem;")
        if (Mouse.current.leftButton.isPressed)
        {
            Ray ray = Camera.main.ScreenPointToRay(Mouse.current.position.ReadValue());
            RaycastHit hit;
            if (Physics.Raycast(ray, out hit))
            {
                isDrawing = true;
                // textureCoord requires the hit object to have a MeshCollider
                Vector2 mouseUVCoords = hit.textureCoord;
                lineDrawer.DrawToMousePosition(mouseUVCoords);
            }
        }
        else if (isDrawing)
        {
            isDrawing = false;
            lineDrawer.StopDrawing();
        }
    }
}
And then we add this to the parent of the LineRenderers:
public class TextureLineDrawer : MonoBehaviour
{
    public List<LineRenderer> lineRenderers;
    public LineRenderer lineRendererPrefab;
    public Transform lineParent;
    private LineRenderer activeLineRenderer;
    public List<Vector3> activePoints;
    public float minimumDelta = 0.01f;

    private void Awake()
    {
        lineRenderers = new List<LineRenderer>();
        activePoints = new List<Vector3>();
    }

    private LineRenderer CreateLineRenderer()
    {
        LineRenderer newLineRenderer = Instantiate(lineRendererPrefab, lineParent);
        lineRenderers.Add(newLineRenderer);
        return newLineRenderer;
    }

    public void DrawToMousePosition(Vector2 mouseUVCoords)
    {
        if (activeLineRenderer == null)
        {
            activeLineRenderer = CreateLineRenderer();
            activePoints.Clear();
        }
        // Only add a point if it has moved far enough from the previous one
        if (activePoints.Count == 0 ||
            Vector3.Distance(activePoints[activePoints.Count - 1], mouseUVCoords) > minimumDelta)
        {
            activePoints.Add(mouseUVCoords);
        }
        // positionCount must be set before SetPositions, or points will be dropped
        activeLineRenderer.positionCount = activePoints.Count;
        activeLineRenderer.SetPositions(activePoints.ToArray());
    }

    public void StopDrawing()
    {
        activePoints.Clear();
        activeLineRenderer = null;
    }
}
There are a few more niceties to be added. Since this sphere connects across the edges of the texture, we will need a bit of code to make sure that a horizontal line doesn't streak across the image every time you cross that boundary. That is left as an exercise for the reader.
Example 3: Merging/layering textures
If you have a lot of similar models, an easy way to add variation is to play with their textures. Maybe your spaceships can have more wear and fatigue; maybe they can have text detailing or flag decals. Of course the overall color can be changed. Using the camera texture setup allows you to merge all of these things into your final texture in a straightforward way.
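One way to sketch this, under the same camera setup: each decal is just a sprite (or quad) placed on the drawing layer at a z depth between the camera and the background, then baked into the render texture. (The field names here are placeholders.)

```csharp
using UnityEngine;

public class DecalLayerer : MonoBehaviour
{
    public SpriteRenderer decalPrefab; // assumed: a sprite prefab on the drawing layer
    public Camera drawingCamera;       // the (disabled) drawing camera

    // Stamp a decal at the given UV position and bake the result
    public void AddDecal(Sprite decal, Vector2 uvPosition, float scale)
    {
        SpriteRenderer sr = Instantiate(decalPrefab);
        sr.sprite = decal;

        // UV space and the drawing camera's world space line up one-to-one;
        // a z of 5 sits in front of the background at z = 10
        sr.transform.position = new Vector3(uvPosition.x, uvPosition.y, 5f);
        sr.transform.localScale = Vector3.one * scale;

        drawingCamera.Render(); // bake all layers into the render texture
    }
}
```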
(Web player with this example will be coming soon)
Example 4: Adding UI elements into a texture
Because UI elements can be rendered using a World Space Canvas, adding UI elements into a texture is pretty straightforward. One small wrinkle is that the UI expects to be working on the scale of dozens or hundreds of game units (which in most UI code represent pixels); in order to keep our setup simple and match the UV coordinates, we want to squeeze ours into one game unit. Simply setting the size of the canvas to that will not be convenient, because then you have to change the default text size to something absurdly small, and every other default value in the UI system will be way off. It's far easier to create your Canvas with a size you'd like to work in (say, 1000x1000), and then set its Scale to 1 ÷ that value (say, 0.001).
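The canvas configuration might be scripted like this (a sketch; the 1000x1000 working size is just the example value from above):

```csharp
using UnityEngine;

public class DrawingCanvasSetup : MonoBehaviour
{
    public Canvas drawingCanvas; // assumed: a World Space canvas on the drawing layer

    void Start()
    {
        var rt = drawingCanvas.GetComponent<RectTransform>();

        // Work at a comfortable UI size (1000x1000 "pixels")...
        rt.sizeDelta = new Vector2(1000f, 1000f);

        // ...then scale it down so it spans exactly one world unit,
        // matching the drawing camera's (0,0)-(1,1) view
        rt.localScale = Vector3.one * (1f / 1000f);
        rt.position = new Vector3(0.5f, 0.5f, 5f);
    }
}
```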