Sunday, 16 March 2014

Using web map services with three.js

In my previous three.js experiments (1, 2, 3, 4, 5) I've only used static files to load terrains and textures. Jarle Pedersen at the Norwegian Mapping Authority has been experimenting with web map services to create 3D terrains.

Jarle is using a modified Web Coverage Service (WCS) to transfer elevation data and a Web Map Service (WMS) for the texture. As WCS doesn't support a raster format that is easily parsed with JavaScript, he extended the service to deliver data in a gridded XYZ format (easting, northing and elevation). I've used this service to get elevation data for my map of Jotunheimen:

,6790000,492000,6850000&WIDTH=200&HEIGHT=200

The service returns data as text:

432150 6849850 1394.54736328125
432450 6849850 1448.778076171875
432750 6849850 1493.23046875
433050 6849850 1517.87548828125

The coordinates are in UTM 32 (meters). It's clearly not the most compact format. As long as we operate on a regular grid or triangle mesh, we only need the altitude values. The altitude values could also be transferred in a binary format with less precision, as shown in my previous blog post.
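Since the grid positions are implicit, a client only needs the third column. Here is a minimal sketch of such a parser (the function name is mine; the actual loader code may differ):

```javascript
// Parse the gridded XYZ response, keeping only the altitude values.
// Easting and northing (columns 1-2) are implicit in a regular grid.
function parseXYZAltitudes(text) {
  return text.trim().split('\n').map(function (line) {
    return parseFloat(line.trim().split(/\s+/)[2]);
  });
}

var response = '432150 6849850 1394.54736328125\n' +
               '432450 6849850 1448.778076171875';
parseXYZAltitudes(response); // → [1394.54736328125, 1448.778076171875]
```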

I've created a WCS terrain loader which sends the request and returns the altitude values from the response in a callback function.

var terrainLoader = new THREE.TerrainLoaderWCS();
terrainLoader.load(wcsUrl, function(data) {
  // data contains the altitude values from the XYZ response
});

Remember that you can only use this loader with a Web Coverage Service which returns data in the same XYZ format.
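The essential request parameters are just a bounding box in map coordinates and an output grid size. A small helper for building that part of the query string (the helper is my own; the real service takes additional parameters):

```javascript
// Build the BBOX/WIDTH/HEIGHT part of a WCS/WMS query string.
// Coordinates here are UTM 32 meters, matching the Jotunheimen extent.
function bboxQuery(minX, minY, maxX, maxY, width, height) {
  return 'BBOX=' + [minX, minY, maxX, maxY].join(',') +
         '&WIDTH=' + width + '&HEIGHT=' + height;
}

bboxQuery(432000, 6790000, 492000, 6850000, 200, 200);
// → 'BBOX=432000,6790000,492000,6850000&WIDTH=200&HEIGHT=200'
```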

Getting texture data from a Web Map Service (WMS) works out of the box:

,6790000,492000,6850000&WIDTH=2048&HEIGHT=2048&LAYERS=...&FORMAT=image/png

var material = new THREE.MeshPhongMaterial({
  map: THREE.ImageUtils.loadTexture(wmsUrl)
});

The map of Jotunheimen looks like this using these services (click to see in WebGL):

By using web map services we can simply change the geographic bounds to get map data for our area of interest:

,6675000,49000,6735000


Lofoten Islands

For code simplicity I'm only operating on rectangular areas. See Jarle Pedersen's repository on GitHub if you want more choice.

See the Norwegian Mapping Authority's site for the license terms and for details on the web services.

Tuesday, 7 January 2014

Creating wee planets with GDAL

In my last blog post, we projected an equirectangular photograph on a sphere using three.js. Today, we'll use the same panorama to create a wee (miniature) planet.

A wee planet is a panorama projected to look like a small planet using a stereographic projection. There is a great collection of wee planets from Alexandre Duret-Lutz on Flickr.
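Why stereographic? It maps the sphere onto a plane from a single projection point, so a full panorama becomes a disc with the ground at the centre and the sky stretched around the rim. A minimal spherical-earth sketch of the south polar case (standard spherical projection formulas; GDAL's ellipsoid version differs slightly):

```javascript
// South polar stereographic projection on a sphere of radius R.
// The south pole (latitude -90°) - the ground under the camera in a
// wee planet - projects to the centre of the image.
function southPolarStereographic(lonDeg, latDeg, R) {
  var lon = lonDeg * Math.PI / 180,
      lat = latDeg * Math.PI / 180,
      rho = 2 * R * Math.tan(Math.PI / 4 + lat / 2);
  return [rho * Math.sin(lon), rho * Math.cos(lon)];
}

southPolarStereographic(0, -90, 6378137); // the pole lands at [0, 0]
```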

There is image software you can use to create wee planets, but why not be a real map nerd and use GDAL instead? We already have a 360°x180° panorama in an equirectangular projection. The only difference from a world map is that the panorama is not georeferenced. Let's fix that with gdal_translate:

gdal_translate -of GTiff -a_ullr -180 90 180 -90 -a_srs EPSG:4326 bergsjostolen.jpg bergsjostolen.tif

We pretend our new GeoTIFF covers the whole world, and use gdalwarp to create a wee planet using the stereographic projection:

gdalwarp -wo SOURCE_EXTRA=10000 -r bilinear -te -22500000 -22500000 22500000 22500000 -ts 1000 1000 -s_srs EPSG:4326 -t_srs EPSG:3031 bergsjostolen.tif bergsjostolen-planet.tif

I'm using the Antarctic Polar Stereographic projection. If we use the same projection on a world map it will look like this:

I wanted the house on top, so I copied the projection string and changed the longitude rotation (+lon_0) to -160:

gdalwarp -wo SOURCE_EXTRA=10000 -r bilinear -te -22500000 -22500000 22500000 22500000 -ts 1000 1000 -s_srs EPSG:4326 -t_srs "+proj=stere +lat_0=-90 +lat_ts=-71 +lon_0=-160 +k=1 +x_0=0 +y_0=0 +ellps=WGS84 +datum=WGS84 +units=m +no_defs" bergsjostolen.tif bergsjostolen-planet.tif

There you go, a wee planet from a map nerd!

Snow kiters at Bergsjø.

Bergsjøstølen in the blue hour.

Monday, 6 January 2014

Photo spheres with three.js

Ever wanted to create those fancy-looking 360° panoramas? If you mix a mobile phone with three.js and some knowledge about (map) projections, it's quite easy. 

I love travelling, especially to mountainous areas of our planet. Taking photos of vast landscapes can be challenging. This drew my interest in spherical panoramas: photographs that capture everything around you - up, down and all around - 360 x 180 degrees.

Google calls it a photo sphere, and if you own a mobile phone with Android 4.2 or higher, you can easily create one yourself. I captured this winter scene with my phone a few days ago. But what if you want to get rid of the Google wrapping and show your panorama in your own viewer? Let's try to do it with three.js.

If you download the image from your phone it looks like this:

This is called an equirectangular photograph, which will be entirely seamless when wrapped around a sphere. It has the same projection as most of the world maps you download on the web. You'll find several funny-looking equirectangular photographs on Flickr.
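The mapping from direction to pixel is linear in both axes, which is exactly why the image drapes cleanly over a sphere. A small sketch of that mapping (the function name is mine):

```javascript
// Map a viewing direction (longitude/latitude in degrees) to pixel
// coordinates in an equirectangular image of the given size.
function equirectToPixel(lonDeg, latDeg, width, height) {
  var x = (lonDeg + 180) / 360 * width;   // longitude → column
  var y = (90 - latDeg) / 180 * height;   // latitude → row, top = +90°
  return [x, y];
}

equirectToPixel(0, 0, 4096, 2048); // straight ahead → image centre [2048, 1024]
```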

Wrapping such images around a sphere is easy with three.js (read the details here):

var sphere = new THREE.Mesh(
  new THREE.SphereGeometry(100, 32, 32),
  new THREE.MeshBasicMaterial({
    map: THREE.ImageUtils.loadTexture('bergsjostolen.jpg')
  })
);
You simply create a mesh of a geometric sphere and a material containing your image as a texture.

This will create a Christmas-ball effect with your photo. Next we need to place the camera inside the sphere. You won't see much unless you turn the mesh "inside out" by setting:

sphere.scale.x = -1;

I'm using the orbit controls to rotate the camera and allow the user to look around inside the sphere.

var controls = new THREE.OrbitControls(camera);
controls.noPan = true;
controls.noZoom = true; 
controls.autoRotate = true;
controls.autoRotateSpeed = 0.5;

Pan and zoom are disabled, as they're not fit for purpose. Instead I'm allowing the user to change the field of view (code from this three.js example):

function onMouseWheel(event) {
  if (event.wheelDeltaY) { // WebKit
    camera.fov -= event.wheelDeltaY * 0.05;
  } else if (event.wheelDelta) { // Opera / IE9
    camera.fov -= event.wheelDelta * 0.05;
  } else if (event.detail) { // Firefox
    camera.fov += event.detail * 1.0;
  }
  camera.fov = Math.max(40, Math.min(100, camera.fov));
  camera.updateProjectionMatrix(); // apply the new field of view
}

document.addEventListener('mousewheel', onMouseWheel, false);
document.addEventListener('DOMMouseScroll', onMouseWheel, false);

I'm also using the detector script to fall back to the Canvas renderer when WebGL is not supported:

var renderer = Detector.webgl ? new THREE.WebGLRenderer() : new THREE.CanvasRenderer();

Now we have our own photo sphere viewer:

[ Fullscreen ]

Feel free to improve it on GitHub.

Bergsjøstølen at dawn.

Towards Reineskarvet.

Hesthovdstølen shieling at 1155 m.

Friday, 6 December 2013

3 months mapping leave from work - what to do?

I've just started a 3-month study leave from the Norwegian Broadcasting Corporation, where I'm spending my savings to acquire new mapping skills. Originally, my plan was to spend the time in South America, but I had to postpone this adventure. Instead, I'll spend some time in Norway, Turkey, the Dolomites/Alps - and where?!?

Do you know of any interesting mapping events going on before 10th March? Would you like some help on an exciting mapping project? Please notify me!

My study plan is to improve my cartography skills, learn how to map with D3.js and to continue my 3D experiments with three.js. 

Winter in Norway

Saturday, 2 November 2013

Showing GPS tracks in 3D with three.js and d3.js

How can you visualize a GPS track with three.js? The tricky part was to get the projection right so the GPS track would line up with my terrain map of Jotunheimen. With the help of D3.js, I was able to do what I wanted.

I'm going to use the same GPS track that I've used previously with Leaflet. My plan was to convert this track to GeoJSON, but none of the tools I tried included the elevation values of my track. Instead of building my own converter, I decided to parse the GPX file directly in JavaScript. GPX is an XML format, and D3.js supports reading XML files with the d3.xml method:

d3.xml('jotunheimen-track.gpx', 'application/xml', gpxParser);

The gpxParser function (shown below) is called when the GPX file is loaded, passing in the root element of the XML document. You can parse this document with the JavaScript XML/DOM access facilities.

My GPS track is in geographic coordinates (latitude and longitude), and I need to project the points to be able to show the track on my map of Jotunheimen. Three.js doesn't have any support for map projections, but we can use the great projection support of D3.js. My map is in the UTM 32 projection, and I've played with UTM in D3.js previously.

The Universal Transverse Mercator (UTM) projection is based on the cylindrical Transverse Mercator projection, which is supported by D3.js. This is how I define the projection:

var terrainSize = 60; // 60 x 60 km

var projection = d3.geo.transverseMercator()
    .translate([terrainSize / 2, terrainSize / 2])
    .scale(terrainSize * 106.4)
    .rotate([-9, 0, 0])
    .center([-0.714, 61.512]);    

The terrainSize can be any size, but I'm using 60 as the area of Jotunheimen I'm mapping is 60 x 60 km. It took me some time to find the values used in the projection configuration methods:
  • translate: The pixel coordinates of the projection’s center. 
  • scale: The scale factor corresponds linearly to the distance between projected points. I figured out that this was terrainSize multiplied by 106.4 for my example, but I don't know why exactly 106.4...
  • rotate: I'm rotating the projection minus 9 degrees longitude which corresponds to the central meridian of the UTM 32 zone. 
  • center:  The longitude and latitude of the projection’s center. This is the same as the center of my map (8.286, 61.512), except that the longitude position is relative to the central meridian of UTM 32 (8.286 - 9 = -0.714).
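One plausible explanation for the magic 106.4 (my own guess, not verified against the D3 source): d3.geo projections measure projected distances in Earth radii, so when one scene unit represents one kilometre on the ground, the total scale factor should be close to the Earth radius in kilometres - which makes the per-terrainSize constant roughly radius / 60:

```javascript
// If 1 scene unit = 1 km on the ground, the d3 scale (units per Earth
// radius) should be roughly the Earth radius in km. This is a crude
// spherical argument, not an exact derivation.
var earthRadiusKm = 6378;                       // equatorial radius
var terrainSize = 60;                           // 60 x 60 km area
var scalePerUnit = earthRadiusKm / terrainSize; // ≈ 106.3, close to 106.4
```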
With these numbers sorted, my GPS track lined up correctly on my map. But how do we render the track in three.js? I'm adding a vertex for each track point to a THREE.Geometry object. This is the code of my GPX parser:

function gpxParser(gpx) {
  var tracks = gpx.getElementsByTagName('trk'),
      geometry = new THREE.Geometry(),
      i, x;

  for (i = 0; i < tracks.length; i++) {
    var points = tracks[i].getElementsByTagName('trkpt');

    for (x = 0; x < points.length; x++) {
      var point = points[x],
          ele = parseInt(point.getElementsByTagName('ele')[0].firstChild.nodeValue, 10),
          lat = parseFloat(point.getAttribute('lat')),
          lng = parseFloat(point.getAttribute('lon')),
          coord = translate(projection([lng, lat]));

      geometry.vertices.push(new THREE.Vector3(coord[0], coord[1], (ele / 2470 * heightFactor) + (0.01 * heightFactor)));
    }
  }

  var material = new THREE.LineBasicMaterial({
    color: 0xffffff,
    linewidth: 2
  });

  var line = new THREE.Line(geometry, material);
  scene.add(line);
}

function translate(point) {
  return [point[0] - (terrainSize / 2), (terrainSize / 2) - point[1]];
}

This function extracts the elevation, latitude and longitude values from the GPX track and creates a vertex for each point. Each coordinate is projected using our D3 projection object and translated to the coordinate space of three.js, as explained in this blog post:
In Three.js the coordinate system works as follows. Point (0,0,0) is the center of the world. As you look at your screen, positive x values move to the right and negative x values move to the left. Positive y values move up and negative y values move down. Positive z values move toward you and negative z values move away from you.
The elevation values are also multiplied by a height factor which is the same I've used for the terrain. In addition I'm adding a small offset, so the track is rendered slightly above the ground. 
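To make the axis flip concrete, here is a self-contained copy of the translate step with terrainSize = 60:

```javascript
var terrainSize = 60;

// Shift projected pixel coordinates (origin top-left, y pointing down)
// into the three.js space (origin at the centre, y pointing up).
function translate(point) {
  return [point[0] - (terrainSize / 2), (terrainSize / 2) - point[1]];
}

translate([30, 30]); // → [0, 0]: the centre of the map
translate([0, 0]);   // → [-30, 30]: the upper-left corner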

I'm using THREE.LineBasicMaterial to create the line style, and THREE.Line to put the line geometry and material together before adding it to the scene. You can see the white line on the map (click to see in WebGL): 

The code is available on GitHub. An alternative would be to skip the elevation values in the GPS track and instead clamp the track to the terrain, but I haven't found an easy way to do this with three.js.

Tuesday, 15 October 2013

Terrain visualization with three.js and Oculus Rift

My good colleague, Harald K. Jansson, owns an Oculus Rift headset, and he couldn’t resist creating a virtual reality version of my 3D map of Jotunheimen. It’s very impressive, especially when you know that everything runs in the browser.

If you’re lucky enough to own an Oculus Rift headset you can try it yourself by clicking on the image below.

The demo is built using two Oculus plugins:

We’ve also switched from TrackballControls to FirstPersonControls, which allows you to fly through the landscape. Mouse navigation is disabled as it didn't work well with the Oculus control, but you can move around with your keyboard:
  • A / “arrow left”: Move left
  • D / “arrow right”: Move right
  • W / “arrow up”: Move forward
  • S / “arrow down”: Move backward
  • R: Move up
  • F: Move down

The code is available on GitHub. Enjoy!

Monday, 14 October 2013

Textural terrains with three.js

This is the third blog post in my "terrain mapping with three.js" series. With the terrain geometry in place, it's time to create a texture and drape it over the terrain.

In the first blog post in this series, we created a digital elevation model (DEM) for Jotunheimen. We can use this DEM to create color relief and hillshade that will enhance the view of our terrain (this blog post explains the details).

gdaldem color-relief jotunheimen.tif color-relief.txt jotunheimen-relief.tif

Creating nice-looking hillshades got easier with GDAL 1.10. By adding the "combined" option to gdaldem you get a slope-enhanced shaded relief:

gdaldem hillshade -combined jotunheimen.tif jotunheimen-hillshade.tif

I'm combining color relief and hillshade with Mapnik, and at the same time adding colors for lakes, glaciers and trees using open data from the Norwegian Mapping Authority (this blog post explains the details):

jotunheimen-texture.xml jotunheimen-texture.png --projected-extent 432000 6790000 492000 6850000 -d 4096 4096

The texture styles are available on GitHub. The final texture (converted to JPG to save some bandwidth) looks like this:

Adding the texture to the terrain mesh is dead easy: Use THREE.ImageUtils.loadTexture to load the texture and add it to the map property of the material:

var material = new THREE.MeshPhongMaterial({
  map: THREE.ImageUtils.loadTexture('../assets/jotunheimen-texture.jpg')
});

We also need to add some light to see the texture:

scene.add(new THREE.AmbientLight(0xeeeeee));

That's all! Our 3D terrain now looks like this (click to see in WebGL):

View of Jotunheimen from the south.

View of Jotunheimen from the north. Galdhøpiggen, the highest mountain in Norway (2469 m), is among the glaciers in the middle.

The code is available on GitHub. In the next blog post, we'll create a virtual reality version of our terrain.

Wednesday, 9 October 2013

Terrain building with three.js

In my last blog post, we converted a digital elevation model (DEM) to a WebGL-friendly format (i.e. easy to transfer and parse by JavaScript in the browser). In this blog post, we'll use the elevation data to build a terrain mesh with three.js

First we need to transfer the terrain data to the browser. The elevation values are stored in a binary file as 16-bit unsigned integers. This page explains how you can send and receive binary data using JavaScript typed arrays. I've created a TerrainLoader by adapting the XHRLoader. You can also use this function:

function loadTerrain(file, callback) {
  var xhr = new XMLHttpRequest();
  xhr.open('GET', file, true);
  xhr.responseType = 'arraybuffer';
  xhr.onload = function(evt) {
    if (xhr.response) {
      callback(new Uint16Array(xhr.response));
    }
  };
  xhr.send(null);
}

Loading elevation data with the TerrainLoader is easy:

var terrainLoader = new THREE.TerrainLoader();
terrainLoader.load('../assets/jotunheimen.bin', function(data) {
  // data is a Uint16Array of scaled elevation values
});

The loader will return the elevation data as an array:

[37068, 38613, 39605, 40451, 39655, 38843, 38843, 38857, 39042, 39316, 40165, 40738, 40369, 39483, 37175, 34492, 32436, 31600, 33473, 37514, ...]

To preserve as much detail as possible, we scaled the floating point elevation values (0-2470) to the full range of a 16-bit unsigned integer (0-65535). You'll find the elevation value in meters with this formula:

var height = Math.round(value / 65535 * 2470);

[1397, 1455, 1493, 1525, 1495, 1464, 1464, 1465, 1471, 1482, 1514, 1535, 1521, 1488, 1401, 1300, 1223, 1191, 1262, 1414, ...]
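Applying the formula to the first raw values above reproduces the decoded heights:

```javascript
// Decode a 16-bit scaled value back to meters above sea level.
function decodeHeight(value, maxElevation) {
  return Math.round(value / 65535 * maxElevation);
}

var raw = [37068, 38613, 39605];
raw.map(function (v) { return decodeHeight(v, 2470); }); // → [1397, 1455, 1493]
```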

From heights stored in an array to a triangle mesh.

Creating a triangle mesh from our elevation data is straightforward, and three.js does the heavy work for us. Use THREE.PlaneGeometry to create a ground plane:

var geometry = new THREE.PlaneGeometry(60, 60, 9, 9);

The first two arguments are the width and height of the plane geometry. The next two arguments are the number of width and height segments. It will create this triangle mesh (click to see in WebGL):

The vertices are at the corners of the triangles. Each vertex has an X/Y/Z position. The red line is the X axis, the green line is the Y axis and the blue line is the Z axis (height).

The width and height segments should be the width and height of your elevation grid minus one. The binary file we created is 200 x 200, so our plane geometry is 199 x 199:

var geometry = new THREE.PlaneGeometry(60, 60, 199, 199);
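A quick sanity check on the sizes: a plane with s x s segments has (s + 1) x (s + 1) vertices, so 199 x 199 segments gives exactly one vertex per value in our 200 x 200 elevation grid:

```javascript
// Vertices produced by THREE.PlaneGeometry(width, height, ws, hs).
function vertexCount(widthSegments, heightSegments) {
  return (widthSegments + 1) * (heightSegments + 1);
}

vertexCount(199, 199); // → 40000, matching the 200 x 200 heightmap
```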

three.js stores all vertices in an array (geometry.vertices) similar to our array of height values (row by row, top to bottom). This makes it very easy to change the Z position:

for (var i = 0, l = geometry.vertices.length; i < l; i++) {
  geometry.vertices[i].z = data[i] / 65535 * 25;
}

If we render this geometry we get this result (click to see in WebGL):
Jotunheimen wireframe

var geometry = new THREE.PlaneGeometry(60, 60, 199, 199);

for (var i = 0, l = geometry.vertices.length; i < l; i++) {
  geometry.vertices[i].z = data[i] / 65535 * 10;
}

var material = new THREE.MeshPhongMaterial({
  color: 0xdddddd,
  wireframe: true
});

var plane = new THREE.Mesh(geometry, material);
scene.add(plane);

Besseggen wireframe

So we got our 3D landscape with just a few lines of code (available on GitHub). In the next blog post we'll add some texture to make it more realistic.

Romsdalseggen ridge in August. 

Sunday, 6 October 2013

Converting terrain data to a WebGL-friendly format

Three.js is a very promising tool if you want to add a third dimension to your web maps. In my last blog post, I showed how easy it was to create a WebGL Earth in the browser. Today, I'm starting a new blog series about terrain building with three.js.

Last year, I wrote a blog series about all the fun you can do with digital terrain data:
  1. Digital terrain modelling and mapping
  2. Creating hillshades with gdaldem
  3. Creating color relief and slope shading with gdaldem
  4. Terrain mapping with Mapnik
  5. Land cover mapping with Mapnik
  6. Using custom projections with TileCache, Mapnik and Leaflet
  7. Creating contour lines with GDAL and Mapnik
  8. Showing GPS tracks with Leaflet
I will continue using map data from Jotunheimen, a mountainous area of Norway. The 29 highest mountains in Norway are all in Jotunheimen, as well as the deepest valley, Utladalen. It's a great spot for 3D terrain mapping. But the same techniques apply to all terrains - you only need some terrain data. Instead of Leaflet, we're going to use WebGL and three.js to render the maps.

Last Friday, 27th September 2013, was a milestone in the mapping history of Norway. The Norwegian Mapping Authority released its topographic datasets to the public, free of charge. Also included was a digital elevation model (DEM) of the Norwegian mainland at 10 meters resolution. You can download the data on this page (unfortunately only in Norwegian) under a CC BY 3.0 licence. The terrain files created in this blog post are also available on GitHub.

Norway is divided into 50 x 50 km tiles, and you can select the areas you want to download on a map.
We need 4 tiles to cover Jotunheimen.  

The DEM files are using the USGS file format for terrain data, and I've selected the UTM 32N projection for my data. I'm using gdalbuildvrt to create a combined virtual dataset (jotunheimen.vrt) of the DEM files: 

gdalbuildvrt jotunheimen.vrt 6704_1_10m_z32.dem 6704_4_10m_z32.dem 6804_2_10m_z32.dem 6804_3_10m_z32.dem

Then we can use gdalwarp to clip the DEM to the area of interest and convert it to GeoTIFF:

gdalwarp -te 432000 6790000 492000 6850000 jotunheimen.vrt jotunheimen.tif

Use gdalinfo to see the properties of this image:

gdalinfo -mm jotunheimen.tif

Driver: GTiff/GeoTIFF
Files: jotunheimen.tif
Size is 6000, 6000
Coordinate System is:
PROJCS["WGS 84 / UTM zone 32N",
    GEOGCS["WGS 84",
            SPHEROID["WGS 84",6378137,298.257223563,
Origin = (432000.000000000000000,6850000.000000000000000)
Pixel Size = (10.000000000000000,-10.000000000000000)
Image Structure Metadata:
Corner Coordinates:
Upper Left  (  432000.000, 6850000.000) (  7d42'39.90"E, 61d46'36.81"N)
Lower Left  (  432000.000, 6790000.000) (  7d43'59.45"E, 61d14'18.21"N)
Upper Right (  492000.000, 6850000.000) (  8d50'54.02"E, 61d46'58.29"N)
Lower Right (  492000.000, 6790000.000) (  8d51' 3.39"E, 61d14'39.21"N)
Center      (  462000.000, 6820000.000) (  8d17' 9.19"E, 61d30'42.34"N)
Band 1 Block=6000x1 Type=Float32, ColorInterp=Gray
    Computed Min/Max=1.900,2467.800

What does it tell us?
  • The image or grid size is 6000 x 6000 pixels or cells.
  • The coordinate system is UTM zone 32N using the WGS 84 spheroid.
  • The coordinate unit is meters.
  • Each pixel or grid cell covers an area of 10 x 10 meters.
  • The extent of the area is defined by four coordinate pairs, covering an area of 3,600 km² (60 x 60 km).
  • The image is single band, and the elevation data is stored as 32-bit floating point values. The minimum elevation is 1.9 meters and the maximum is 2,467.8 meters.
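The numbers above are easy to cross-check: 6000 pixels at 10 m each gives 60 km per side, and 60 x 60 gives the 3,600 km² extent:

```javascript
var pixels = 6000, pixelSizeM = 10;       // from the gdalinfo output
var sideKm = pixels * pixelSizeM / 1000;  // 60 km per side
var areaKm2 = sideKm * sideKm;            // 3600 km²
```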

We got the data and the information we need to start terrain modelling. Our first task is to transfer the terrain data to the browser. As most web browsers don't support TIFF files, it's common to convert the DEM into a heightmap. It can be thought of as a grayscale image where the intensity of each pixel represents the height at that position. Black indicates the minimum height and white the maximum height. It's easy to create a heightmap with gdal_translate:

gdal_translate -scale 0 2470 0 255 -outsize 200 200 -of PNG jotunheimen.tif jotunheimen.png

This command will create a PNG where the height values are reduced to 256 shades of gray. I'm also reducing the size from 6000 x 6000 px to only 200 x 200 px to save our GPU. Each pixel or elevation value now covers an area of 300 x 300 meters.

We can then transfer the image to the browser and read the height values directly from the image. This can be sufficient for many uses, but I'm afraid my terrain would look "blocky" with only 256 height values. It's possible to use the colour channels to get a wider height span with PNGs, but I couldn't find an easy way to achieve this with GDAL. Reading these notes, I saw that I could use a format called ENVI. In this format the height values are provided as a sequence of raw bytes, and I'm storing the values as 16-bit unsigned integers:
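The "blocky" concern is easy to quantify: over a 0-2470 m range, each of the 256 grey levels covers almost ten vertical meters, while 16 bits brings the step below four centimetres:

```javascript
var range = 2470;            // elevation range in meters
var step8 = range / 255;     // ≈ 9.7 m per grey level in an 8-bit PNG
var step16 = range / 65535;  // ≈ 0.04 m per step with 16-bit integers
```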

gdal_translate -scale 0 2470 0 65535 -ot UInt16 -outsize 200 200 -of ENVI jotunheimen.tif jotunheimen.bin

To preserve as much detail as possible, I'm scaling the floating point elevation values to the full range of a 16-bit unsigned integer (0-65535). I'm also creating a heightmap in full 10 m resolution of the Besseggen mountain ridge (2 x 2 km): 

gdalwarp -te 484500 6818000 486500 6820000 jotunheimen.vrt besseggen.tif

gdal_translate -scale 982 1742 0 255 -of PNG besseggen.tif besseggen.png

gdal_translate -scale 982 1905 0 65535 -ot UInt16 -of ENVI besseggen.tif besseggen.bin

Lastly, I'm creating a tiny heightmap of only 10 x 10 px for testing purposes: 

gdal_translate -scale 982 1905 0 255 -outsize 10 10 -of PNG besseggen.tif besseggen10.png

gdal_translate -scale 982 1905 0 65535 -outsize 10 10 -ot UInt16 -of ENVI besseggen.tif besseggen10.bin

In the next blog post, we'll start playing with our terrain data in three.js.

Autumn in Nordmarka in Oslo. Photo: Bjørn Sandvik, 5th October 2013. 

Saturday, 28 September 2013

Creating a WebGL Earth with three.js

This blog post will show you how to create a WebGL Earth with three.js, a great JavaScript library which helps you to go 3D in the browser. I was surprised how easy it seemed when reading a blog post by Jerome Etienne. So I decided to give it a try using earth textures from one of my favourite cartographers, Tom Patterson.

WebGL is a JavaScript API for rendering interactive 3D graphics in modern web browsers without the use of plug-ins. Three.js is built on top of WebGL, and allows you to create complex 3D scenes with a few lines of JavaScript. If your browser supports WebGL you should see a rotating Earth below:

[ Fullscreen ]

To be able to display something with three.js, you need three things: a scene, a camera and a renderer.

var width  = window.innerWidth,
    height = window.innerHeight;

var scene = new THREE.Scene();

var camera = new THREE.PerspectiveCamera(45, width / height, 0.01, 1000);
camera.position.z = 1.5;

var renderer = new THREE.WebGLRenderer();
renderer.setSize(width, height);

The scene is the container used to store and keep track of the objects (earth and stars) we want to render. The camera determines what we'll see when we render the scene. I'm using a PerspectiveCamera, which is what we usually associate with seeing the world. The renderer is responsible for rendering the scene in the browser. Three.js supports different renderers like WebGL, Canvas, SVG and CSS 3D. We use the window width and height to allow our earth to fill the browser window.

Next we have to turn on the light:

First: Ambient light - Second: Directional light - Third: Ambient and directional

scene.add(new THREE.AmbientLight(0x333333));

var light = new THREE.DirectionalLight(0xffffff, 1);
light.position.set(5, 3, 5); // position the "sun"
scene.add(light);

Three.js supports different light sources that have specific behaviours and uses. I'm using ambient and directional light:
  • Ambient light: basic light which is applied globally. The dimmed ambient light reveals the areas facing away from the sun.
  • Directional light: light that mimics the sun. All the light rays we receive on Earth are parallel to each other.

OK, we've got the scene, camera, renderer and light sorted. It's time to model our Earth using a sphere geometry and a material, which together form a mesh in three.js:

new THREE.Mesh(
  new THREE.SphereGeometry(0.5, 32, 32),
  new THREE.MeshPhongMaterial({
    map: THREE.ImageUtils.loadTexture('images/2_no_clouds_4k.jpg'),
    bumpMap: THREE.ImageUtils.loadTexture('images/elev_bump_4k.jpg'),
    bumpScale: 0.005,
    specularMap: THREE.ImageUtils.loadTexture('images/water_4k.png'),
    specular: new THREE.Color('grey')
  })
);

The sphere is created using THREE.SphereGeometry. The first parameter is the radius, and the second and third parameters are the number of width and height segments. The sphere is drawn as a polygon mesh, and by adding more segments it will be less "blocky" - and take more time to render.

A sphere rendered with 8, 16 and 32 width/height segments. 

Next, we use THREE.MeshPhongMaterial to wrap map data around the sphere. This material is used to create shiny materials, and we use it to make the ocean reflective.

I'm using map data from Natural Earth III, a great collection of shaded relief maps by cartographer Tom Patterson. Natural Earth III is a collection of raster map data tailored towards visualising Earth from space. Compared to photographs of Earth, Natural Earth III offers brighter colours, fewer clouds and distinct environmental zones. The maps are very pleasant to look at.

Let's start with a texture map of the Earth without clouds:

I reduced the image size to 4096 x 2048 px, which was the maximum texture size for the GPU of my computer. If you want more detailed textures you need to slice up the Earth.

Second, I'm using a bump map to enhance the view of the mountains:

Bump mapping is a technique to simulate bumps and wrinkles on the surface of an object. The result is an apparently bumpy surface, although the underlying geometry is not actually changed. I'm sorry, you can't tilt the camera to see 3D mountains with this technique. You can adjust the bump effect (how much the map affects the lighting) with the bumpScale parameter.

The original image (left) contains shaded relief, so the bump map effect is limited on this texture (right).   

Lastly, I want to make the ocean and lakes reflective by applying a land/water mask. This specular map defines the surface's shininess. Only the sea is specular, because water reflects light more than land does. You can control the specular colour with the specular parameter.

Adding a specular map to make the ocean reflective (right).

Our Earth looks good, but I still miss some clouds. 64 percent of Earth's surface is obscured by clouds on an average day, so this cloud texture was edited in Photoshop to remove the clouds from land areas.

I couldn't use this JPEG directly in three.js, so I used this technique to make a transparent PNG (available on GitHub). I then created a new sphere mesh with a slightly larger radius:

new THREE.Mesh(
  new THREE.SphereGeometry(0.503, 32, 32),
  new THREE.MeshPhongMaterial({
    map: THREE.ImageUtils.loadTexture('images/fair_clouds_4k.png'),
    transparent: true
  })
);

Our planet on a very clear day.

The last thing we'll add to our scene is a galaxy starfield:

Starfield from the "Learning Three.js" website.

The starfield is created by adding a large sphere around the Earth and projecting the star texture onto the backside, or inside, of the sphere:

new THREE.Mesh(
  new THREE.SphereGeometry(90, 64, 64),
  new THREE.MeshBasicMaterial({
    map: THREE.ImageUtils.loadTexture('images/galaxy_starfield.png'),
    side: THREE.BackSide
  })
);

Earth in space.

As a final touch, let's add some interaction:

var controls = new THREE.TrackballControls(camera);

function render() {
  controls.update();
  sphere.rotation.y += 0.0005;
  clouds.rotation.y += 0.0005;
  requestAnimationFrame(render);
  renderer.render(scene, camera);
}
render();

I've added the trackball controls plugin by Eberhard Graether, which allows you to rotate, zoom and pan the scene. The render function is called at an interval defined by the browser using the requestAnimationFrame function. The sphere and clouds rotate about their axes as the rotation.y value increases.

Here you can see the final result!

A long blog post, but only 84 lines of code - available on GitHub!