## Soft skin

I am currently working on a skinning library called SoftSkin. Its purpose is to approach skinning differently: less rubbery and more mesh/geometry-oriented. The idea is to use the edges of the mesh as springs that try to recover their initial length by pulling/pushing on the vertices. The lexical field is focused on anatomy, to keep the code lively.

The integration into Godot is on its way, as shown in this video:

The repository is hosted on GitLab: https://gitlab.com/frankiezafe/SoftSkin
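A minimal sketch of this edge-as-spring idea, in plain Java (this is my own illustration, not the actual SoftSkin API): each edge stores its rest length and, on every update, pulls or pushes its two vertices back toward it.

```java
// Sketch of the edge-as-spring idea (not the actual SoftSkin code):
// an edge remembers its initial length and relaxes back toward it.
public class SpringEdge {
    public float ax, ay, bx, by;   // the two vertices of the edge
    public final float restLength; // initial length the edge tries to recover
    public final float stiffness;  // fraction of the error corrected per step

    public SpringEdge(float ax, float ay, float bx, float by, float stiffness) {
        this.ax = ax; this.ay = ay; this.bx = bx; this.by = by;
        this.stiffness = stiffness;
        this.restLength = length();
    }

    public float length() {
        float dx = bx - ax, dy = by - ay;
        return (float) Math.sqrt(dx * dx + dy * dy);
    }

    // move both vertices half of the correction each, toward the rest length
    public void relax() {
        float len = length();
        if (len == 0) return;
        float error = (len - restLength) / len; // >0: too long, <0: too short
        float cx = (bx - ax) * 0.5f * stiffness * error;
        float cy = (by - ay) * 0.5f * stiffness * error;
        ax += cx; ay += cy;
        bx -= cx; by -= cy;
    }

    public static void main(String[] args) {
        SpringEdge e = new SpringEdge(0, 0, 1, 0, 0.5f);
        e.bx = 2; // stretch the edge to twice its rest length
        for (int i = 0; i < 50; i++) e.relax();
        System.out.println(e.length()); // converges back toward 1.0
    }
}
```

Iterating `relax()` over all edges of a mesh is the basic loop; a real implementation would add damping and mass, but the convergence behaviour is already visible here.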

## The perfect halo function

The halo is generated by a linear blend between a sine applied to a sigmoided angle (sinus1) and a standard sine (sinus2).

The sigmoided sine has the advantage of remaining powerful far from the center:


The main problem with this method is the very short blurred area, which makes the transition too sudden.

The basic sine method has exactly the opposite problem: a tiny white area but plenty of blur.

To combine both smoothly, the code gradually shifts the weight of each process according to the distance to the center (pc).

The Java code (processing.org) looks like this:

```java
float pc = distance_to_center / 256;                  // normalized distance [0,1]
float a = pow( pc, 1.5 ) - 0.3;
float sig = 1 / ( 1 + exp( -8 + a * 16 ) );           // sigmoid on the distance
float sinus1 = ( 1 + sin( -HALF_PI + ( sig * PI ) ) ) * 0.5;        // sigmoided sine
float sinus2 = 1 - ( ( 1 + sin( -HALF_PI + ( pc * PI ) ) ) * 0.5 ); // standard sine
float d = 255 * ( sinus1 * ( 1 - pc ) + sinus2 * pc ); // blend, weighted by pc
```

## Straight skeleton C++ implementation

Long time no see!

A lot of things happened in May and June; that's one of the reasons for this long silence. The other is the library I'm working on to generate the maps of Disrupted Cities. Implementing a complex process in an efficient way is harsh. But it's starting to work nicely.

A small openFrameworks demo shows the part of the lib dedicated to shrinking 2D shapes, used later in the map to generate the blocks of houses.

Shrink Demo on bitbucket.

See you soon for more.

• #### frankiezafe 16:33 on 2017-03-26 | Tags: algorithm, code, math, processing.org

To generate the blocks of buildings based on the road structure, the method I'm building relies on a simple idea: when you arrive at a crossroad, you take the first street on the right, and you go on like this until you reach a dead end or your starting point. If you reach your starting point, the succession of roads you took defines a block of buildings. In theory. This technique was suggested by Michel Cleempoel, on the way back from school.

After a bit of preparation of the road network (removing orphan roads that have no connection with the others, and dead-end parts of the roads), the real problem arose: how do you define "right" in a 3D environment without an absolute ground reference? Indeed, I can configure the generator to use the Y axis (the top axis in Ogre3D) in addition to X & Z.

At a crossroad, you may have several possible roads. In the research, these possible roads are reduced to 3D vectors, all starting at the world's origin. The goal is to find the closest vector on the right of the current one, called the main 3D vector in the graphic above.

"Right" is a complex idea, because it implies a rotation. The closest on the right doesn't mean the most perpendicular road on the right side. Let's say I have 4 roads to choose from: two going nearly in the opposite direction of the road I'm on, one perpendicular, and one going straight on.

If I compute the angles these roads make with the current one, the results are:

1. 5°,
2. -5°,
3. 90°,
4. and 170°.

The winner is not the 90° road, but the 5° one! And if I sort them, the last one must be the -5° road, which is the first on the left.

##### 3d plane from a 3d vector

The first thing to do is to define a reference plane. To do so, you get the normal vector of the road by doing a cross product with the UP axis (the Y axis in this case). The normal gives you a second vector, perpendicular to the road, and the two together define a plane; let's call it the VT plane. For the calculation, we also need the vector perpendicular to this plane, obtained by crossing the road with its normal; let's call it the tangent vector. Up to here, it's basic 3D geometry.
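The construction above can be sketched with two cross products, here in plain Java with arrays as 3D vectors (the names are mine, not from the original code):

```java
// Sketch of the reference-plane construction: normal = road x up,
// tangent = road x normal; both end up perpendicular to the road.
public class RoadFrame {
    static float[] cross(float[] a, float[] b) {
        return new float[] {
            a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]
        };
    }

    static float dot(float[] a, float[] b) {
        return a[0] * b[0] + a[1] * b[1] + a[2] * b[2];
    }

    public static void main(String[] args) {
        float[] up = { 0, 1, 0 };   // Y is the top axis, as in Ogre3D
        float[] road = { 1, 0, 0 }; // current road direction
        float[] normal = cross(road, up);      // perpendicular to the road
        float[] tangent = cross(road, normal); // perpendicular to the VT plane
        System.out.println(dot(road, normal));  // 0.0
        System.out.println(dot(road, tangent)); // 0.0
    }
}
```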

##### projection of 3d vectors on a plane

We can now project all the possible roads onto the VT plane. These are the yellow vectors in the graphic. The math is clearly explained by tmpearce on Stack Overflow. Implemented in Processing, it gives:

```java
float d = othervector.dot( tangent );
// subtract the component along the tangent to get the projection on the plane
PVector projectedvector = othervector.copy();
projectedvector.sub( PVector.mult( tangent, d ) );
```

We are nearly done!

##### angle between 3d vectors

The projected vectors will help with the angle calculation. Indeed, since the current vector and the projected ones are coplanar, they share the same normal. The way to get the angle between 2 coplanar vectors is described by Dr. Martin von Gagern, on Stack Overflow once again. See the "Plane embedded in 3D" paragraph for the code I've used.

And… tadaaammmm! The number returned by the method is the angle I was searching for, displayed in degrees in the graphic above.
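The whole selection can be sketched as follows, in plain Java: signed angles around the shared normal via atan2 (the "Plane embedded in 3D" approach), mapped to [0, 360) so that the smallest value is the first road on the right and the largest is the first on the left. Which sign means "right" depends on the handedness of your axes; the numbers below just reproduce the 5°/-5°/90°/170° example from the text.

```java
// Sketch of the rightmost-road selection: sort candidate roads by their
// signed angle to the current road, measured around the plane's normal.
public class RightmostRoad {
    // signed angle (degrees) from ref to v around the shared normal n
    static double signedAngle(double[] ref, double[] v, double[] n) {
        double[] c = {
            ref[1] * v[2] - ref[2] * v[1],
            ref[2] * v[0] - ref[0] * v[2],
            ref[0] * v[1] - ref[1] * v[0]
        };
        double cross = c[0] * n[0] + c[1] * n[1] + c[2] * n[2];
        double dot = ref[0] * v[0] + ref[1] * v[1] + ref[2] * v[2];
        return Math.toDegrees(Math.atan2(cross, dot));
    }

    static double toTurn(double deg) { // map (-180, 180] to [0, 360)
        return deg < 0 ? deg + 360 : deg;
    }

    // candidates are given as angles (degrees) to the current road
    static double pickRightmost(double[] angles) {
        double[] n = { 0, 0, 1 };   // shared normal of the coplanar vectors
        double[] ref = { 1, 0, 0 }; // current road
        double best = Double.MAX_VALUE, winner = 0;
        for (double a : angles) {
            double r = Math.toRadians(a);
            double[] v = { Math.cos(r), Math.sin(r), 0 };
            double t = toTurn(signedAngle(ref, v, n));
            if (t < best) { best = t; winner = a; }
        }
        return winner;
    }

    public static void main(String[] args) {
        // the 5° road wins, not the 90° one; -5° maps to 355°, first on the left
        System.out.println(pickRightmost(new double[] { 5, -5, 90, 170 }));
    }
}
```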

• #### frankiezafe 20:50 on 2017-03-18 | Tags: algorithm, disrupted cities, math, Openframeworks

Result of different configurations of the network at each pass. In each image, you see the road network alone and the network with the control grid. I'm proud to mention that generating a big network takes around 500 milliseconds, something easy to hide with a small transition.

Here, there are 3 passes plus an initial road (the thick one). At each pass, the roads become smaller and thinner.

It's also possible to generate the same network with depth enabled. It becomes very complex to follow visually, but it makes no mistake 🙂

• #### frankiezafe 17:46 on 2017-03-18 | Tags: dev, generative, math, Openframeworks, random, research

• generation of normal and tangent for each segment (cyan & purple vectors): they can be used easily to generate a new road starting from any point;
• a much better random selection, based on the formula: X1 = (a*X0 + b) % m;

This random generation merits a bit of attention.

Until now, I was randomly picking a new start point from an existing road to create a new road. The process is costly, and there is no guarantee of avoiding picking the same point on the same road several times.

With the formula above, found on the great Numberphile channel (see below), I attribute once and for all a random value to each road's dot. The particularity of this random generation is that it NEVER repeats the same value twice in one sequence. Once generated, my dots have a number in the range [0,1], with a linear distribution.

For instance, in a line having 10 dots (and therefore 9 segments), each dot will have a random number between 0 and 1. If you order the list of dots by random value and compare the gaps between the sorted values, the average gap will be 0.1!

The way to use this random number is straightforward. If you want to generate a secondary road at 50% of the dots of the first one, you just have to loop over these numbers and check whether the random value of the dot is < 0.5. If the distribution were not linear, doing this would not guarantee creating one sub-road every two dots. As it is, you can just specify the percentage; all the random calculation has already been done, and in a more controlled way than ofRandomuf() does it.

This formula requires big prime numbers (>10000) for a and b. Here is the source I used: list of primes.
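The dot-numbering idea can be sketched like this, in plain Java: x_i = (a*i + b) mod m visits every residue 0..m-1 exactly once when a and m share no factor, so dividing by m gives each dot a unique, evenly distributed value in [0,1). The primes below are placeholders, not the ones used in the original generator.

```java
// Sketch of the per-dot random assignment: a permutation of {0/m .. (m-1)/m}
// instead of repeated calls to a random() function.
public class DotRandom {
    static final long A = 104729;   // a large prime, as suggested in the text
    static final long B = 15485863; // another large prime

    // pseudo-random value in [0,1) for dot i of a road with m dots
    static double value(long i, long m) {
        return ((A * i + B) % m) / (double) m;
    }

    public static void main(String[] args) {
        int m = 10; // a road with 10 dots (9 segments)
        for (int i = 0; i < m; i++) {
            double v = value(i, m);
            // selecting dots with v < 0.5 keeps exactly 50% of them,
            // because the m values are a permutation of {0/m .. (m-1)/m}
            System.out.println(i + " -> " + v + (v < 0.5 ? "  (selected)" : ""));
        }
    }
}
```

Because the values never collide, thresholding at any percentage selects exactly that fraction of the dots, which is the property the post relies on.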

• #### frankiezafe 21:11 on 2016-09-17 | Tags: 3D, dev, math, Ogre3D, trigonometry

Trigonometry hell.

Finishing the day with cool code running:

• World coordinates to camera coordinates, especially useful for sound sources. Indeed, top, right and front depend on the relative position of the source in the camera space, not the global one.
• Transformation of the sticks' directions into world coordinates relative to the camera, once again. This time, the calculation is based on the lookat location, keeping the system centered on the screen.

The second point was trickier but got solved first… Finding the position of an object in camera space is basically a change of reference: the center of the world is not (0,0,0) anymore but the camera's world location, and up is not (0,1,0) anymore but the camera's orientation.

A bit of code can help to understand the trick:

```cpp
// creation of the camera matrix, not sure it's possible to retrieve it more easily...
Matrix4 cam_mat = Matrix4( cam->getDerivedOrientation() );
cam_mat.setTrans( cam->getDerivedPosition() );
// inversion of the cam matrix
Matrix4 cam_mat_inverse = cam_mat.inverse();
// for a given vector expressed in global coordinates
Vector3 v( 10, 5, -45 );
// construction of a matrix representing this translation
Matrix4 m = Matrix4::IDENTITY;
m.setTrans( v );
// MAGIC! > conversion to camera space
m = cam_mat_inverse * m * cam_mat;
// and, finally, getting back the coordinates in camera space
v = m.getTrans();
```
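The same change of reference can be checked numerically without Ogre, here in plain Java (my own illustration): with the camera's rotation R (camera-to-world) and position t, a world point p lands in camera space at R^T * (p - t), and for a pure direction the position offset cancels out, as in the matrix sandwich above.

```java
// Numeric sketch of world-to-camera conversion: subtract the camera's
// position, then apply the transpose (= inverse) of its rotation matrix.
public class CameraSpace {
    // apply the transpose of the rotation matrix R to v
    static double[] invRotate(double[][] R, double[] v) {
        return new double[] {
            R[0][0] * v[0] + R[1][0] * v[1] + R[2][0] * v[2],
            R[0][1] * v[0] + R[1][1] * v[1] + R[2][1] * v[2],
            R[0][2] * v[0] + R[1][2] * v[1] + R[2][2] * v[2]
        };
    }

    static double[] toCameraSpace(double[][] R, double[] t, double[] p) {
        double[] rel = { p[0] - t[0], p[1] - t[1], p[2] - t[2] };
        return invRotate(R, rel);
    }

    public static void main(String[] args) {
        // camera at (10,0,0), rotated 90 degrees around Y
        double[][] R = {
            {  0, 0, 1 },
            {  0, 1, 0 },
            { -1, 0, 0 }
        };
        double[] t = { 10, 0, 0 };
        // a world point 5 units in front of the rotated camera
        // (assuming Ogre's convention that the camera looks down -Z)
        double[] p = { 5, 0, 0 };
        double[] local = toCameraSpace(R, t, p);
        System.out.println(local[0] + ", " + local[1] + ", " + local[2]);
        // prints 0.0, 0.0, -5.0: straight ahead in camera space
    }
}
```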
