Update on the Hoa Hakananai’a Statue
In 2012, ACRG members James Miles and Hembo Pagi completed a series of RTI captures and a photogrammetry model of the Easter Island statue Hoa Hakananai’a, which is currently housed in the British Museum. Since then, in collaboration with Mike Pitts, we have examined the results of these RTI files and compared them with the photogrammetry model. A brief discussion of this work can be seen in a previous blog post. The methodology that we used allowed for a full analysis of the statue that had previously been impossible. It allowed us to examine the RTI files in fine detail, using raking light and the viewer’s rendering modes to reveal changes in surface relief. Where we thought we had identified something important, we could then check whether it existed in the 3D model. The subtle 2D differences visible in the RTI could thus be mapped onto, and compared with, the 3D surface differences of the virtual replica. Through this we were able to clear up some of the often ambiguous interpretations of the petroglyphs. More on our results and methodological approach can be seen in our recently published Antiquity paper, our Antiquaries Journal paper, Mike’s paper in the Rapa Nui Journal, and our soon-to-be-published paper in the Proceedings of the 41st Computer Applications and Quantitative Methods in Archaeology Conference. The work has also attracted a great deal of public interest, and our research features in recent online publications which are a Google search away.
Since these publications I have been working, as part of our initial research aims, on producing online versions of the datasets, where users can view and manipulate the records that we have. This will allow people with different backgrounds to come to their own conclusions. Part of the purpose of this blog post is to make available, for the first time, the RTI files that were produced during our investigation. At Southampton we are working on creating a newer and better online RTI viewer, but at the moment we are limited to producing low-resolution datasets for online use. The following is therefore a severely reduced RTI dataset, but the results are still evident nonetheless. The RTI files have been separated into five different sections, one of the front and four of the back (with a slight overlap), to allow for a greater understanding.
In Van Tilburg’s recent response to Mike’s paper (in the same Rapa Nui Journal), she states that “The major issue with regard to PTM (RTI), is that to advance a thesis of interpretation and avoid bias one must allow review of all of the images produced, not just selected ones that support a given point of view.” I would like to clarify this statement, as it seems that the purpose of RTI has been overlooked. Each of the five files contained between 57 and 87 images, as capturing more than about 90 would be counter-productive given the way in which the individual images are processed. Van Tilburg’s statement suggests some confusion as to how an RTI is produced. Rather than requiring each individual image to be examined, the RTI builder merges all of the static images into a file format that allows for the virtual movement of a light source. This removes the need to examine each image separately, which, I agree, could lead to incorrect conclusions. Instead it allows for a greater depth of investigation through the combination of these different static images and the movement of virtual light, removing the ambiguous and problematic context of previous investigations of the statue. Van Tilburg also neglects to mention the use of our virtual model (which was based on 150 images) within our interpretations, as she wrote her article before ours were fully published. Going from the RTI files to the 3D model is something that has never been done before with this statue, and her argument that we used selected views is wrong and short-sighted. She has made derogatory remarks regarding our work without viewing the entirety of our research. This is something that needs to be corrected, and so also included within this blog post are updated versions of our photogrammetry model through different online viewers.
This will allow you to make a fuller and more complete interpretation, as you too can move between the two different datasets that we used and come to your own conclusions.
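To make the point above concrete, the core idea behind the PTM variant of RTI can be sketched in a few lines of code. This is a simplified illustration, not the actual RTI builder we used: it assumes the standard biquadratic PTM model, in which each pixel’s luminance is fitted as a polynomial of the light direction (lu, lv), and all function names here are my own.

```python
import numpy as np

def fit_ptm(images, light_dirs):
    """Fit per-pixel biquadratic PTM coefficients.

    images: (N, H, W) array of luminance values, one per light position.
    light_dirs: (N, 2) array of (lu, lv) light-direction components.
    Returns an (H, W, 6) array of polynomial coefficients.
    """
    lu, lv = light_dirs[:, 0], light_dirs[:, 1]
    # Design matrix: one row per captured image, six biquadratic terms.
    A = np.stack([lu**2, lv**2, lu * lv, lu, lv, np.ones_like(lu)], axis=1)
    N, H, W = images.shape
    L = images.reshape(N, -1)                       # (N, H*W)
    coeffs, *_ = np.linalg.lstsq(A, L, rcond=None)  # (6, H*W)
    return coeffs.T.reshape(H, W, 6)

def relight(coeffs, lu, lv):
    """Evaluate the fitted PTM under a new, virtual light direction."""
    basis = np.array([lu**2, lv**2, lu * lv, lu, lv, 1.0])
    return coeffs @ basis                           # (H, W) luminance image
```

Once the per-pixel coefficients are fitted, `relight` can be called with any (lu, lv) at all, including light directions that were never physically captured. This is why examining the 57–87 source photographs one by one misses the point: they are raw input to a per-pixel fit, and it is the fitted, continuously relightable surface that is examined.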
Although I may not have a full understanding of Rapa Nui archaeology, I do have fairly high expertise in the digital technology that was used within our investigation. I can say with high certainty that the results shown are the most accurate and most complete investigation ever undertaken on the Hoa Hakananai’a statue. I spent many months going through these datasets to find hard-to-see details, making sure that any results were checked many times. I have used these technologies on a range of different items, from prehistoric through to Victorian times, from small to large objects, and I have been lucky enough to work all over the world doing this. The technology stands firm in every instance and provides clearer results than previous methodological approaches. It is therefore hoped that the inclusion of the original files will provide further insight into, and clarification of, the full published record that we have presented over the last year. We will discuss our results in yet further detail in the new television series “Treasures Decoded”, which will be broadcast September 10th on More4 (in the UK) and via the History Channel. In the meantime please carefully view the results shown below, and please do get in contact with us if you see anything that we have missed!
The following RTI files are of the same resolution as those used within our investigation. Each of the following links leads to a separate RTI file, accessible through a web RTI viewer produced by the Visual Computing Lab in Pisa, Italy.
For our highest-resolution online model please visit this custom-made website. Most online viewers limit the number of faces that can be included, but the viewer (based on 3DHOP) that I built loads the model progressively, according to your internet connection.