One of my favourite sites on the web is Our World in Data. Their stated goal is to use research and data to make progress against the world’s largest problems. It's obviously an important goal and they have done an excellent job of showing the state of the world and how it has changed over time. From their About page:
It is possible to change the world
To work towards a better future, we need to understand how and why the world has changed in the past. There are two reasons for this:
It shows us that progress is possible. The historical data and research shows that it is possible to change the world. In many important ways global living conditions have improved. While we believe this is one of the most important facts to know about the world we live in, it is one that is known by surprisingly few. Many believe that the world is stagnating or getting worse in aspects where the opposite is true.
The second reason is that it allows us to learn. Progress is possible, but it is not a given. If we want to know how to reduce suffering and tackle the world’s problems we should learn from what was and was not successful in the past.
Here is an example chart from the site, with sources clearly provided, options to view it as a chart, map, or table, and the ability to focus on particular countries of interest. You can download the data or embed the chart in your own site as I have done here. It's an excellent resource and I encourage you to take a look!
Right now the SciArt Tweet Storm is happening on Twitter. Started in 2015 by the Symbiartic team at Scientific American, the idea is to advance the presence of images in science communication and culture. I have embedded a few of my favourite images below.
The #sciart #TweetStorm week has returned!! Here a collection of beetles I illustrated, one of my favourite subjects. #longhornbeetle #woodland #bugs #illustration #drawing #art #beetles #insectart #insects #Entomology pic.twitter.com/M9PWhTLFxN— Claudia Hahn (@Claudia_Hahn) March 2, 2019
Happy #WorldWildifeDay! For the past decade, I've been doing my best to help save frogs from extinction. You can help support my work by purchasing artwork from https://t.co/q7DbqXx4Xa or from https://t.co/fgnXzar8rJ #WWD2019 #ConservationOptimism #wildlife #sciart pic.twitter.com/HQcOGVduGq— Dr. Jonathan Kolby (@MyFrogCroaked) March 3, 2019
Large group of Cyanobacteria (Oscillatoria sp.) moving back and forth. Sped up 10x. For more info: https://t.co/5Yy8L5qrM2#scicomm #sciart #biology #cynanobacteria #microbiology #microscopy #art pic.twitter.com/t2cfZQF6dM— Julia Van Etten (@CouchMicroscopy) February 27, 2019
This first image shows the population density around Toronto and Markham, where I live. I rotated the view to look southward so the tall bars showing high density in downtown Toronto don't hide the values for the suburbs.
This second view shows the population change during 1990-2015.
Hello everyone! It's been quite a while since my last update. In fact last year, 2018, was the first time I didn't post during a whole year since I started this blog back in 2006.
Here is a simple plot of the number of posts here over the course of time:
I hope to publish more often this year. Thanks for sticking around!
A week or so ago I put together a simple project illustrating the locations of coffee shops within the Toronto area. I was curious about the density of coffee shops within the city and also the distributions of the larger coffee chains. In the image below the small dots are locations and the areas are coloured based on the closest location. The colours are Red - Tim Hortons, Green - Starbucks, Yellow - Second Cup, Purple - Coffee Time, Orange - Country Style, Blue - Other.
The Tim Hortons red dominates much of the geography outside of the city, and the location density is obviously much higher in the downtown area. Zooming in to downtown shows a more fractured landscape with strong pockets for Starbucks and the independent or small cafes.
Here is the interactive coffee territory map of Toronto. Data was gathered from OSM, interactive map built using Leaflet, and the voronoi overlay created with D3.
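The colouring rule behind the Voronoi overlay can be stated in one line: every point on the map takes the colour of its nearest shop. Here's a minimal sketch of that rule, in Python rather than the D3 used for the real map, with made-up coordinates standing in for the OSM data:

```python
import math

# Hypothetical shop locations (name, lat, lon) -- illustrative only,
# not the actual OSM data used for the real map.
SHOPS = [
    ("Tim Hortons", 43.65, -79.38),
    ("Starbucks", 43.67, -79.39),
    ("Second Cup", 43.66, -79.40),
]

def nearest_shop(lat, lon, shops=SHOPS):
    """Return the chain whose location is closest to (lat, lon).

    Colouring every map point this way yields the Voronoi territories
    in the post; D3 computes the cell polygons directly, but the
    assignment rule is the same. Treats lat/lon as planar, which is a
    fine approximation at city scale.
    """
    return min(shops, key=lambda s: math.hypot(s[1] - lat, s[2] - lon))[0]
```

D3's voronoi layout just precomputes the boundaries of the regions where this nearest-shop answer is constant.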
Today is the tenth anniversary of my first post on Neoformix! Thank you all for your attention and feedback over the years. I never dreamed I would be doing this for so long but it's been great fun. Thank you also to all the creators of interesting and informative work in the field of data visualization and creative coding.
I just launched my first mobile app. It's a game, called Stars and Stones, and you can download it now on the Apple App Store for free.
I enjoy games that have a simple natural user interaction, are easy to learn, but have a rich depth of play - an elegant complexity. That's what I attempted to create and I think I came close in many respects. Stars and Stones is a series of dynamic puzzles that get progressively more challenging. There are over 100 levels and the first 50 are free.
In each puzzle you drag a token around to try and capture five stars while avoiding stones. The stones move as you move and their speed is relative to your own - the faster you move, the faster they move. When you stop, they stop. Most of them in the early levels move like brainless physical objects. As you progress they take on more complex behaviours - they chase you, or block your progress, or block your access to the boosters which aid you in your task. The stones all look the same so to succeed in the game you must recognize patterns in their movement and exploit them.
Here are a few images to give you a feel for the game.
It's available for iPhone and iPad, and I'd be very happy for you to try it and let me know what you think!
The role of storytelling in data visualization has become much discussed over the last year or so. One reason I find this aspect so interesting is that my natural tendency is to focus on exploratory visualization; much of my past data visualization work is weak on the storytelling side. Coming from a scientific background and personally enjoying the act of discovering patterns in data means my default approach is to build exploratory tools. For me, personally, this whole storytelling aspect seems a rich area to mine in order to improve my work.
I just finished listening to the latest Data Stories podcast called Visual Storytelling which is a discussion of the topic by hosts Moritz Stefaner, Enrico Bertini, and their guests Alberto Cairo and Robert Kosara. It's an excellent conversation from a number of perspectives on the subject and I found it very stimulating. If you haven't already heard it then make sure you have a listen.
I was surprised that one aspect of the topic wasn't discussed in the podcast: storytelling techniques in data visualization can be abused to express falsehoods. One thing that is of critical importance to me in data visualization work is that it is grounded in reality - it's based on data which are, hopefully, objectively true or based on some real measurements. To be sure, there is often uncertainty involved and for some topics objectivity is difficult but still, data visualization should be about describing reality as best we can.
Like many people with an engineering, mathematical, or scientific background, I'm suspicious of salesmanship and marketing. I'm wary of other people using emotion and a good story to persuade me to believe something that isn't true. I have some concern that data visualization work that emphasizes storytelling is more likely to be 'Data Fiction' - or propaganda. The designer, through careful choice of selected facts, use of emotion, drama, conflict, and all the other techniques of storytelling can craft a message at odds with reality. The use of 'data' will even lend an air of authority to that message.
Storytelling is a powerful tool for leading a person efficiently to the main points uncovered in a dataset and can dramatically increase the impact of a work. It's very important that the story emerges from quality data and that this connection is open to inspection. Let's make sure that all our data stories are true.
Winter has finally ended in Markham where I live and it has seemed a very long and cold season this year. I decided to take a look at the weather data from Environment Canada and see whether my impression is supported by the data. The result is the graphic below. Click on it to see a larger version.
Yes, 2014 was the coldest winter in Markham since 1994. We had an average temperature during the winter of -8.2 C this year and in 1994 it was -9.2 C. Both last year and especially 2012 were warmer than usual so it likely felt that much worse in comparison. We also had the 4th most snow in the last 20 years so it was both very cold and snowy.
Toronto is the most multicultural city in the world. According to the 2011 National Household Survey, 46% of the population were foreign-born immigrants and 47% were members of a visible minority. (ref) These immigrants come from a wide variety of places across the globe and their diversity makes the city a truly remarkable place.
I have created a Dot Map that shows a single point for every person in the Toronto area, coloured by visible minority status. There are 5,700,628 dots in all, positioned by place of residence and coloured based on the information from the 2011 census and National Household Survey. The dots do not depict actual individual locations but are placed based on the statistics over small areas.
This first image is zoomed in slightly and shows Toronto with only a few outlying areas. You can see regions of higher and lower population density as well as how the visible minorities are distributed across the city.
You can explore the map in detail with this Zoomable Dot Map of Toronto.
The section below is a close-up of the high-density string of condos along Yonge Street north of HWY 401. You can spot the blank rectangle of the cemetery to the left, the Don river valley, and commercial areas where no people reside.
The next image shows the white, predominantly Italian, area of Woodbridge with the South Asian concentration obvious to the west in Brampton.
It was created with population data from Statistics Canada and map reference data from OpenStreetMap. The OpenStreetMap data was taken from the very helpful Metro Extracts provided by Michal Migurski. The TileMill tool from MapBox was used to compose a map used to mask out non-residential areas and also the basemap underneath the dots. Custom code written with Processing was used to place the actual dots and create the final images. Thanks!
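The dot-placement step can be sketched roughly as follows: for each small census area, scatter one random dot per person inside the (masked) area, tagged with that person's group. This toy Python version uses a plain rectangle in place of the real area polygons and residential masking:

```python
import random

def place_dots(bounds, counts, seed=0):
    """Scatter one dot per person inside a census area.

    bounds -- (min_x, min_y, max_x, max_y) of the small area; a toy
              stand-in for the real dissemination-area polygons
    counts -- {group_name: population} from the census table
    Returns a list of (x, y, group) dots placed uniformly at random,
    so dots reflect area statistics rather than actual addresses.
    """
    rng = random.Random(seed)
    min_x, min_y, max_x, max_y = bounds
    dots = []
    for group, n in counts.items():
        for _ in range(n):
            dots.append((rng.uniform(min_x, max_x),
                         rng.uniform(min_y, max_y),
                         group))
    return dots
```

The real pipeline additionally rejects random positions that fall in the non-residential mask, but the statistics-to-dots idea is the same.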
The calls people make into the 311 service line in Toronto give an interesting glimpse into the pulse of the city. The City of Toronto makes this data available through their Open Data initiative. I did some analysis and design work with it to produce a visualization for illuminating time-based patterns during 2012.
The visualization is a set of small multiple calendar heatmaps, one for each data series. The one shown above is for reports about 'long grass and weeds'. I was inspired to use this visual form by this example: Vehicles involved in fatal crashes by Nathan Yau. I experimented with a few different visual methods but this one did the best job of revealing both the seasonal and day of week patterns. I chose to use a unique colour scale for each series in order to maximize the amount of detail.
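The core of a calendar heatmap is just a date-to-grid mapping plus a per-series colour scale. A rough Python sketch of both pieces (not the actual Processing JS code):

```python
from datetime import date

def calendar_cell(d, year=2012):
    """Grid position of a day in a calendar heatmap:
    column = week index counted from the week of Jan 1,
    row = day of week (Monday = 0)."""
    offset = (d - date(year, 1, 1)).days + date(year, 1, 1).weekday()
    return offset // 7, offset % 7  # (week column, weekday row)

def normalise(counts):
    """Per-series colour scale: rescale this series' daily counts to
    0..1 so each small multiple uses its own full colour range."""
    lo, hi = min(counts), max(counts)
    return [(c - lo) / (hi - lo) if hi > lo else 0.0 for c in counts]
```

Mapping each normalised value through a colour ramp then fills the grid; because the scale is per series, seasonal and weekly structure stays visible even for low-volume request types.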
The image below shows the top 20 most common types of requests. Click on the image to load the full sized version. You can also view all the data series with an interactive version of the Toronto 311 Visualization.
This was created with Processing JS and contains information licensed under the Open Government Licence - Toronto.
One common pattern I see in many interactive applications is to support a person who is selecting a few items from some larger set. Often these items have various characteristics that the person wants to use in some way to guide their selection process. The characteristics can be numeric quantities, dates, categories, or names of things. Showing all the items in a list and allowing the person to sort by one of the attributes is often a decent default solution.
In other cases it's more useful to consider multiple attributes at a time during the selection process. Maybe you want items that are high in one attribute, low in another, and are from a particular category. Ideally the selection process should be one of exploration and successive refinement, where various filtering criteria are adjusted until some small subset of items is defined and those items can be investigated individually.
I have built an example of this concept which I call the Visual Book Selector. The books are directly represented with small circles and filters can be applied to progressively exclude books by various criteria. The filters are depicted visually as gates through which some of the items can pass and others cannot. The image below shows one possible configuration.
There are about 1000 books, which start in the top segment of the display when no filters have been applied. In this example three of the category gates have been opened so books from those categories can pass through. The ones that don't pass this filter pile up near their closed gate, which helps give some understanding of their distribution. The books that pass the first criterion encounter a second filter on the average rating of the book from Google Book reviews. This filter gate is set to only allow books having an average rating of at least 4.0 to pass through. The final gate does a pattern match on author name and allows 4 books, which have passed all of the criteria, through to the bottom.
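The gate mechanic amounts to running every book through an ordered list of predicates and recording where each rejected book stops. A small Python sketch of that idea, with made-up book records and gates:

```python
def apply_gates(books, gates):
    """Run books through a sequence of filter 'gates'.

    books -- list of dicts (toy stand-ins for the real records)
    gates -- list of predicates; a book must pass every gate in order
    Returns (passed, piles) where piles[i] holds the books stopped at
    gate i, mirroring how rejected books pile up near a closed gate.
    """
    piles = [[] for _ in gates]
    passed = []
    for book in books:
        for i, gate in enumerate(gates):
            if not gate(book):
                piles[i].append(book)
                break
        else:  # no gate rejected this book
            passed.append(book)
    return passed, piles
```

Keeping the rejected piles per gate, rather than just discarding filtered items, is what lets the display show the distribution of near-misses at each barrier.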
The best way to get a feel for it is to try out the Visual Book Selector yourself. You can use the dropdown selectors on the left of each segment barrier to choose different criteria on which to filter. Hover over a book to see details and click on its circle to visit the corresponding Google Books page.
The list of books and their categories comes from the 2009 article in the Guardian 1000 novels everyone must read: the definitive list. The other data was gathered from Google Books.
I should also note that an excellent solution to the multi-attribute selection/exploration problem posed here is the Elastic Lists concept by Moritz Stefaner. It supports what's called Facet Browsing and enhances it with the visualization of proportions and distributions as well as animated transitions.
Recently YouTube had a video that showed all six Star Wars movies at once. They were placed in a 2 by 3 matrix with an audio track of all the movies superimposed. It was an interesting experiment that has since been removed on copyright grounds. Before it was removed I was able to do some simple analysis on the video and extract some details of the individual episodes of the Star Wars series.
Basically, I produced something very similar to a classic work called Cinema Redux™ by Brendan Dawes, done in 2004. Each individual movie in the series was reduced to a collection of small snapshots taken at 1 second intervals. The snapshots are laid out 60 images per row so a row corresponds to a minute in the film. These 'fingerprint' images reveal some aspects of the film structure.
Click on any of these images to see higher resolution versions.
I used some fairly simple code in Processing to analyze the video and create the output images.
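The layout step is simple enough to sketch: the snapshot taken t seconds into the film lands at column t mod 60, row t div 60. A tiny Python version of that mapping (the original was done in Processing):

```python
def snapshot_grid(duration_s, cols=60):
    """Map each one-second snapshot to a grid cell, 60 per row, so
    each row of the fingerprint image spans one minute of film.
    Returns a list of (column, row) cells indexed by second."""
    return [(t % cols, t // cols) for t in range(duration_s)]
```

Drawing each extracted frame as a small thumbnail at its cell position produces the fingerprint image.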
Last week the wonderful Guardian Datablog published an interesting post called Obesity worldwide: the map of the world's weight. It contains a map that uses colour to show the rates of obesity around the world. The accompanying chart gives data for different time frames and for both males and females, which you can select and view on the map. When I saw the chart I immediately thought of a number of interesting questions that could not be easily answered with the map or chart.
Much of my past work has been driven by personal curiosity. That, together with my background in science, has shaped my work such that most of it has been exploratory in nature. Recently I have been thinking more about the storytelling or communicative aspect of data visualization. This has been triggered by my admiration for the amazing work of the New York Times Graphics Department, and the writings of Alberto Cairo, Robert Kosara, Andy Kirk, and Jonathan Stray.
I decided to try to build an interactive visualization that helped answer the questions above. I also tried to build something that explicitly highlighted some of the more interesting aspects of the data without sacrificing freeform exploration. I settled on using a Slopegraph, which was first described by Edward Tufte and is featured on the cover of Cairo's excellent book The Functional Art.
This first image shows the trend for male obesity organized by continent. It's difficult to show labels for so many countries along one axis, so I tried to alleviate the problem by letting the user expand or hide countries by continent group. In this case 'North America' is expanded to show its individual countries. Labels are only shown if they don't overlap with others. The largest countries by population are placed first.
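The label-thinning logic can be sketched as a greedy pass: take the countries already sorted by population and keep a label only if it doesn't collide with one placed earlier. A toy Python version (the label height and the data here are made up, not taken from the actual application):

```python
def place_labels(countries, height_px=14):
    """Greedy label placement for one axis of a slopegraph.

    countries -- (name, y_position) pairs, pre-sorted by population
                 so the biggest countries get first claim on space
    A label is kept only if it is at least one label-height away
    from every label already placed; the rest are hidden.
    """
    placed = []
    for name, y in countries:
        if all(abs(y - py) >= height_px for _, py in placed):
            placed.append((name, y))
    return [name for name, _ in placed]
```

Sorting by population first means that when two labels collide, it's the smaller country's label that disappears.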
Individual country lines can be clicked on to emphasize them with colour.
The third example shown below charts female values on the left against male values on the right in order to emphasize gender differences.
The interactive visualization includes a 'stepper' that takes the user through four different views. This helps introduce functionality gradually as well as serving to emphasize important patterns in the data.
In addition to the people and organizations mentioned above I would like to acknowledge the people behind Processing and Processing JS which was used to build the application. The code for the dashed lines comes from J David Eisenberg. Thanks!
In 2006, I started this blog as an outlet for my creative personal work as well as to gather in one place references to interesting work by other people. Since then, Neoformix has grown into a full-time business for me specializing in the development of custom data visualizations. I have just spent some time giving the website its first facelift in 7 years. I hope you like it!
I've tried to simplify the design and emphasize that Neoformix is a business by designing a main page that highlights some projects and moving the blog to a secondary page. Thanks to Twitter Bootstrap for a powerful front-end framework which I made use of in the redesign.
About five years ago I posted a simple little application called Word Hearts which lets you fill a heart shape with words. Last year it was the most visited page on my site despite the fact that it was still a Java-applet-based application, which many modern browsers won't run. I have updated this tool to use Processing JS so it runs well in modern browsers. There is also enhanced functionality like:
Here are a couple of examples of what you can do:
Launch the interactive version of Word Hearts to try it out.
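The heart fill itself comes down to a point-in-shape test: a candidate word position is only used if it falls inside the heart. A sketch using a classic implicit heart curve (an assumption; the actual tool may define the shape differently):

```python
def in_heart(x, y):
    """True if (x, y) lies inside the implicit heart curve
    (x^2 + y^2 - 1)^3 - x^2 * y^3 <= 0, which spans roughly
    -1.2..1.2 in x and -1..1.3 in y. Words are only drawn at
    positions where this test passes."""
    return (x * x + y * y - 1) ** 3 - x * x * y ** 3 <= 0
```

Packing words then becomes a matter of trying candidate positions and sizes and keeping those whose bounding area stays inside the shape.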
I have built another little digital humanities project based on the text of the 62 stories in Grimm's Fairy Tales. This one is called Grimm's Story Metrics and presents an interactive matrix of stories together with various metrics calculated from their text. You can click on a column to sort by that data, click again to reverse the direction, and click on a story name to open it in another window. The image below shows the stories sorted by the 'Royalty' metric which indicates, as you would expect, how many references there are to words related to the topic of royalty. Click on the image to go to the interactive tool.
Hovering over any of the bars shows details about that particular measurement. Most of the metrics, like 'Royalty', are based on topics and the details shown are the words characteristic of that topic used in the story. So, for example, the details for 'Royalty' in the 'Frog-Prince' are princess, prince, king, kingdom which are listed in frequency order. These topical metrics are normalized based on total words in the story so longer stories have no scoring advantage.
The 'Lexical Diversity' is a ratio of the number of unique words in the story to the total words. These stories are fairly short and you can observe a rough inverse relationship between 'Story Length' and 'Lexical Diversity'. 'Clever Hans' is an outlier in this relationship. If you examine the text for this story you'll see that there is a great deal of repetition.
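Both kinds of metric are easy to sketch: a topical metric counts matches against a topic word list and divides by story length, while lexical diversity divides unique words by total words. A toy Python version (the word lists and tokenization here are illustrative, not the actual ones used):

```python
def topic_metric(words, topic_words):
    """Share of a story's words that belong to a topic word list,
    normalised by story length so longer stories gain no advantage."""
    hits = sum(1 for w in words if w in topic_words)
    return hits / len(words)

def lexical_diversity(words):
    """Ratio of unique words to total words in the story."""
    return len(set(words)) / len(words)
```

For example, with a 'Royalty' list like {king, princess, queen}, a story that mentions royalty in 3 of its 8 words scores 0.375 regardless of how the matches are distributed.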
The area of each word reflects its frequency in the text. Words are connected to their top three most similar words, with similarity defined by collocation within the text. The words in the outer ring have only one weak connection to another word in the graph.
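One way to read 'similarity defined by collocation' is to count how often other words occur within a small window of the target word, then keep the top three as graph connections. A rough Python sketch (the window size and tokenization are assumptions, not the actual method):

```python
from collections import Counter

def top_collocates(words, target, window=3, k=3):
    """Rank words by how often they appear within `window` positions
    of `target` -- a simple collocation-based similarity -- and keep
    the top k as connections for the word graph."""
    counts = Counter()
    for i, w in enumerate(words):
        if w == target:
            lo, hi = max(0, i - window), i + window + 1
            for other in words[lo:i] + words[i + 1:hi]:
                counts[other] += 1
    return [w for w, _ in counts.most_common(k)]
```

Running this for every prominent word, and linking each to its top collocates, yields a graph like the one described above.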
My previous post on the Grimm's Fairy Tale Network showed a graph illustrating the strongest connections between the various stories. I used a few techniques to try and prevent the usual mess of connections that often obscure the relationships of interest.
Another way of tackling graphs with lots of connections is to only show a small portion of the graph at a time and use interaction to provide navigation. This lets you browse around a complex network of nodes and relations and repeatedly get views centered on a node of interest. I've created an example of this for the Grimm's fairy tale data which I call the Grimm Fairy Tale Connection Browser.
The image below shows the connections to the story 'Little Red Riding Hood'. The larger circles are stories and the smaller ones represent key words in the collection. The inner ring shows the words and stories closely connected to the story of interest. The outer ring gives the related stories and words that are related but with less strength. You can click on any story or word to make it the new focus node. Click on the image below to launch the interactive version.
This second example shows the stories and other words highly related to the word 'wolf'. The interactive tool shows the Gutenberg version of the stories in a panel on the right. When a new story is made the central focus of the visualization the right panel shows the story text.
This was created with Processing JS.