Tuesday, April 12, 2016

Virtual Reality: iPhone or Microwave?

It's fair to say that the iPhone, and all the smartphones that followed, have revolutionized our lives.  Microwaves haven't.  I remember when these ovens first became common: my Mum cooked a microwave cake following a complex recipe involving a temperature probe.  The cake was a flaccid, pale disappointment.  Everyone soon learnt that microwaves weren't going to replace ovens or hobs - they were good for heating up last night's stir fry, doing baked potatoes quickly and pretty much nothing else.

It's clear that there is a lot of media noise about VR at the moment, driven by the release of the Oculus Rift and the lower-spec Google Cardboard.  Column inches are no guarantee of success, so we should be asking: will VR be a microwave or an iPhone technology?  Will it rocket in popularity or fail to impress for the second time?  My vote is for 'meh' rather than 'yay!' and I'll try to persuade you of my point of view with a bit of deconstruction:

Tunnel Vision: To understand what VR does and doesn't offer, I need to digress into explaining a bit about your visual system.  Look slightly to the right of the text on whatever device you're reading this on.  Despite being able to see paragraphs and lines, you'll find you can no longer read the words.  That's because your vision is made up of a very sensitive zone (the fovea) which takes up half of the nerves that link your eye to your brain.  Around this sensitive centre is a less responsive zone.  You couldn't read the text when not looking at it because of the lack of visual processing power in this outer zone.  Although this part of the eye is less sensitive, it is good at detecting movement; you can prove this with another little experiment - pick something moving in your visual field, like a tree in the wind, then look away by 60 degrees or so.  In your peripheral vision you should notice the moving branches but will not really 'see' the trunk of the tree.  So your eye really does work as if it has a mild case of tunnel vision.

The final part of the visual system I want to describe to you is eye movements.  To keep track of what is going on around us (is this lion I see stalking through the grass about to eat us?) our eyes flit around moving the fovea rapidly from place to place in order to track the important things (lion) whilst ignoring other less important objects (grass).  These movements are known as saccades and your brain is so good at processing the patches of high density information that you gain from them that you are largely unaware of your eye movements.  As a result, you have the sense that you are looking everywhere at once despite the fact that you aren't - you're actually sampling the space in high resolution patch by patch and tracking movement everywhere.  This video is a lovely illustration of that fact:




What does VR add?  When we use VR, the goggles cover our whole visual field, not just part of it.  However, when we go on a virtual field trip or watch a film on a non-VR device, our eyes direct our fovea to what is being shown on the tablet.  A video of our eyes would show them flicking from place to place in rapid saccades, scanning the screen for the most important thing to look at.  The fact that our peripheral vision is looking at the bedroom, bus or library that surrounds the tablet doesn't really matter, because we are processing the information we need in our fovea just fine.  So I'm suspicious that VR doesn't really add that much to the information we can gather from a virtual field trip when compared to the same content delivered on, say, a laptop screen.

Immersion:  But gathering information isn't the only benefit that VR is said to produce; it's also said to be immersive.  By this people mean that it produces the feeling that you are actually in the place depicted.  In a recent Radio 4 programme an example was given where VR goggles were used in a lab to show a full-vision simulation of the same lab.  Then the virtual floor opens up before the viewer and they are asked to step into the hole - a challenge for the part of their brain that knows what they are seeing is not real to overcome the part that really thinks the goggles are showing the truth.  Users explained just how compelling they found the illusion and were convinced of the power of VR as a result.

I'd raise the question: what about when they get used to seeing 180° surround vision?  Will they still be fooled the 10th time they are asked to step into the hole?  I'd predict that they won't, just as they weren't as compelled by the magic of the 10th place they looked at in Google Earth as they were by the first (which was, of course, the roof of their house).  So I'd argue that immersion is the novelty of a new medium that is closer to reality than the media you're used to, and that the novelty wears thin quickly.  Lasting impressions are due to quality content rather than the medium: reading the words that make up Hamlet is an immersive experience.

So what is VR good for?  I've clearly argued that VR isn't going to be an iPhone technology that dramatically changes the way we live.  However, predicting how a technology is going to develop is clearly foolish - crystal balls don't work.  I tend to think it is more like the microwave, useful without being transformative, though its impact could end up somewhere between the two.  I do predict it's going to revolutionize gaming - immersion is such a strong draw in this case.  I also think there are some educational applications for VR in situations where you have to see the wide picture before homing in on detail; examples would be paramedics presented with a crash scene having to triage which patients to treat first, and geologists presented with a cliff section having to find a certain small-scale geological feature.  There could be a 'killer app' use we haven't foreseen but I'm not convinced: as a technology it doesn't add to the content because we 'see' mostly through our fovea, not the outer zone of our retinas, and the immersion effect will only last as long as the novelty does.


Saturday, March 19, 2016

Tracking students in Google Earth

Our paper 'Footprints in the sky: using student track logs from a 'bird's eye view' virtual field trip to enhance learning' has been published.  It describes how students were tracked zooming and panning around Google Earth on a virtual field trip.  Their movements were recorded and their visual attention inferred as a paint spray map: high attention = hot colors, track = blue line.

A paint spray map of 7 students (1-7) performing a search task in Google Earth.
Background imagery has been removed to aid clarity.
Click to expand.

How it works
The idea is to track students performing a search task; in our experiment they looked for evidence of an ancient lake, now dried up, in a study area.  Their 3D track as they zoom and pan around in Google Earth is recorded, and their visual attention is mapped as if it were a can of paint spraying: if they zoom in to check an area in close up, Visual Attention (VA) builds up densely; if they zoom out, VA still builds up but is spread over a much larger area.

Mapping the accumulation of VA along with their track projected onto the ground (blue line) shows at a glance where the students have searched and in what detail.  The small multiples above show data from 7 students who were given 3 set areas to investigate in further detail (target/guide polygons).  This was done in Google Earth but, to aid visibility, the Google Earth base map has been removed.  From the maps we can infer what the students were doing, e.g. student g5 didn't appear to visit the top right guide polygon at all and students g1, g3 and g6 only gave it a cursory look.  By comparison, students g2 and g4 explored it much more thoroughly.
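For readers curious about the mechanics, the paint-spray idea can be sketched in a few lines of Python.  This is an illustrative toy, not the code from the paper: the function name, the grid, and the assumption that footprint radius is simply proportional to camera altitude are all mine; the one property it demonstrates is the key one above - each time step deposits the same total amount of VA 'paint', so zooming in gives a small hot patch while zooming out spreads a thin coat over a wide area.

```python
import numpy as np

def accumulate_va(track, grid_size=100, cell=1.0, paint_per_step=1.0):
    """Accumulate Visual Attention (VA) on a grid from a camera track.

    track: iterable of (x, y, altitude) camera positions, one per time step.
    Assumption (mine, for illustration): the visible footprint is a disc
    whose radius is proportional to camera altitude.
    """
    va = np.zeros((grid_size, grid_size))
    ys, xs = np.mgrid[0:grid_size, 0:grid_size]
    for x, y, alt in track:
        radius = max(alt, 1.0)  # zoomed out = higher altitude = bigger footprint
        mask = (xs * cell - x) ** 2 + (ys * cell - y) ** 2 <= radius ** 2
        n_cells = mask.sum()
        if n_cells:
            # Same total paint per step, spread over however many cells
            # are in view - a thick coat close up, a thin coat zoomed out.
            va[mask] += paint_per_step / n_cells
    return va
```

Running a track that hovers zoomed in over one spot against a track that hovers zoomed out over the same spot deposits the same total VA in both maps, but the zoomed-in map has a much higher peak - which is exactly why the hot colours in the figures mark detailed inspection.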

How it could be used
The idea would be to give the maps to students to help them assess how they did on the exercise.  In addition, the VA from all students can be collated, which the tutor can use to see whether the activity worked well or not (bottom right of the multiples above).  In this case the summed VA shows that students examined the areas they were supposed to, that is, within the target/guide polygons.

The system only works with a zoom-and-pan navigation system where the zoom function is needed to explore properly.  If the exercise can be solved just by panning, a paint spray map won't show much variation in VA and interpretation would be difficult to impossible.


Other Related Work
Learning Analytics is a growing area of investigation; there's lots of work tracking students' logs in VLEs (LMSs in the US) to understand their learning.  There has also been use of tracking to see where avatars have moved in virtual environments, visualizing it as a 'residence time' map similar to the VA maps above.  However, this is the first attempt we've come across where movement in a 3D virtual environment via zoom and pan has been tracked and visualized.




Monday, February 29, 2016

Three Types of Google Earth Tour

I happen to have been thinking about the different types of Google Earth tour recently.  I've come up with three main types:

3D Flyover:
This type uses just camera motion and flies through an area of significant topography (think mountain range) or other 3D structure (think buildings or geology).  It's immersive in the sense that it is close to flying through the actual landscape presented.  Here's an example:

http://www.nps.gov/grca/learn/photosmultimedia/fly-through.htm

I think this type is a bit dated; people were very excited by these tours when they first became possible, but now that we've all grown used to Google Earth they don't impress that much anymore.

Map Tour:
In this type the viewer is flown from location to location, with other media being used, e.g. photos or overlays on the topography.  It may use other map animations, such as time animations, but these are more minor.  It doesn't really try to be immersive; the power of the camera movement is to explain the relative locations of things or to illustrate maps over two or more scales.  A couple of examples:






Time tour:
This final type is more an animated map than a tour; it is mainly time animation, with camera motion being a less significant animation type.  Like the map tour, it doesn't aim for an immersive experience but instead uses Google Earth as a base map on which to present thematic data over time.  A good example is this sea ice animation from NSIDC:




Friday, January 29, 2016

Microsoft Mix - first look

Warning:  nothing about maps, just of interest to educators.

So I've come across Office Mix this week; it's an add-on for MS PowerPoint with a linked cloud hosting service.  I've had a play and I think it's a definite force to be reckoned with.  It does the clever trick of combining:

  • Easy audio and slide creation
  • Easy creation of written materials with links and embedded video
  • Self-assessment quizzes
  • Polling
  • Easy cloud management (no ed tech help needed!)
  • Good tutorials
  • AND you get learner analytics

The analytics are a big plus: you can see if a certain student has accessed your mix or only gone halfway through.  You can also see if most students skipped a slide, which hints at whether it worked or not.

The only downsides are:

  • Functionality for students on mobile devices (tested on iPad/iPhone) is much reduced - you have to convert to a movie and you lose interactivity
  • I've found it a bit ropey in places, e.g. it fails to upload once in a while and the analytics behave oddly at times.

Some ideas for what you could use it for:
  • Recording Skype tutorials for those who can't make it
  • Creating screencasts of PowerPoint content with a quick feedback survey at the end to gauge how your students found them
  • Giving feedback on essays by screencasting yourself looking at the essay and adding ink onto the script

Very much worth looking at.

Wednesday, May 20, 2015

Ten new Google Geo tools for the Classroom

Googler John Bailey (Program Manager for Geo Edu) recently gave a talk for Google Education on Air on Google's Geo tools:



Being able to tilt the view over a crater on the Moon brought to mind a teacher quote from one of my sessions last year:
"you just made me fall in love with Geography again"
I had to tear myself away...  Anyway, I thought I'd point you at my favorite ten new* examples of tools/content that John showcased:

1] 7:10m Distance: measure distance tool in Google Maps

2] 7:40m Area: that it also measures area in Google Maps

3] 8:27m Carousel: geolocated photos in Google Maps, taken and uploaded to Google by users

4] 9:20m Tilt: how to tilt to see 3D Google Earth-style views using the tilt button (bottom left); Pisa is the location used

5] 10:05m Globe View: zoom out to globe view, which rotates when clicked and dragged

6] 11:08m Mars and Moon View: zoom out to the full extent and you can now rotate around the globe by clicking and dragging, and can access Mars and the Moon.

7] 11:19m Two Map system: compare and contrast maps using geteach.com 

8] 38:25m Street View historical imagery: see Street View before and after the Japanese tsunami on Google Maps (at a location with historic Street View available).

9] 43:42m Tour Builder

10] 47:28m Time Lapse using Google Earth Engine. 48:25 is a great moment showing a Peruvian river meandering dynamically.

*Actually some of them are new-ish rather than new

Wednesday, March 25, 2015

Creating static images in Google Maps

Getting images into Google Docs and then into Google My Maps


Monday, November 10, 2014

Cloud Mapping Compared

So after coming back from the NACIS conference I've been looking at cloud mapping again.  At the conference Mapbox, CartoDB, Leaflet and ArcGIS Online were getting a lot of mentions.  Compare that to searches from Google Trends:


My interpretation is that:

  • The non-experts are using, or interested in, ArcGIS Online or Google Maps Engine
  • The experts are interested in the others
  • Cloud mapping is on the up (as far as search terms go, anyhow)