Dissonance Imaging Machine 
 
What is plumage? Is it one entity, situated upon and around the bird? Or is it thousands of separate entities, emanating from various points and layered together uniquely? For us, the answer is: both. We understand that plumage, or tree blossom, or fog can be, visually, both a single entity and a multitude. This kind of dissonant thinking is an issue for machine learning, and specifically for imaging technologies. Imaging technologies rely heavily upon datasets and algorithms to assess the information in an image, and they depend upon labels and distinct paradigms of processing developed from limited knowledge banks. It is therefore difficult for an imaging technology to hold the dissonant idea that the subject of an image is singular yet multiplicitous. Photogrammetry, a form of 3D scanning that creates 3D models by algorithmically processing multiple photographs of a subject, relies on a consensus amongst those images to reliably plot its ‘point cloud’, the basis of the 3D mesh (a minimal sketch of this consensus step follows the list below). When this limited dataset meets a complex object, the dissonance produces glitches. Confronted by images which display a complex array of tiny objects, the imaging technology renders three distinct glitches: 
a) Single entity: a continuous surface or object instead of a nebulous collection of individual objects. 
b) Floating objects: warped, phantom forms and disembodied, hovering parts of the mesh which defy gravity. 
c) Empty space: holes and negative spaces where the technology couldn’t achieve consensus. 
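The consensus step referred to above can be sketched in a few lines. What follows is a minimal, illustrative two-view pipeline using OpenCV rather than any particular photogrammetry package; the filenames and camera intrinsics are hypothetical, and a real reconstruction would run this matching across dozens of overlapping photographs.

```python
# A minimal sketch of photogrammetric "consensus": matching features between
# two photographs and triangulating the agreements into a sparse point cloud.
# Filenames and the camera matrix K are assumed for illustration only.
import cv2
import numpy as np

img1 = cv2.imread("view_01.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("view_02.jpg", cv2.IMREAD_GRAYSCALE)

# 1. Detect local features in each photograph.
sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# 2. Match features between views; Lowe's ratio test discards ambiguous matches.
#    Repetitive or indistinct surfaces (plumage, blossom, fog) fail here:
#    without unambiguous correspondences there is no consensus to triangulate.
matcher = cv2.BFMatcher()
good = []
for pair in matcher.knnMatch(des1, des2, k=2):
    if len(pair) == 2 and pair[0].distance < 0.7 * pair[1].distance:
        good.append(pair[0])

pts1 = np.float32([kp1[m.queryIdx].pt for m in good])
pts2 = np.float32([kp2[m.trainIdx].pt for m in good])

# 3. Estimate the relative camera pose from the surviving correspondences.
K = np.array([[1200.0, 0.0, 640.0],   # assumed intrinsics for illustration
              [0.0, 1200.0, 360.0],
              [0.0, 0.0, 1.0]])
E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC)
_, R, t, mask = cv2.recoverPose(E, pts1, pts2, K, mask=mask)

# 4. Triangulate the matched points into a sparse 3D point cloud.
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([R, t])
pts4d = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)
point_cloud = (pts4d[:3] / pts4d[3]).T  # unmatched regions simply stay empty
```

The ratio test is where the dissonance becomes visible: surfaces that look the same from many viewpoints, or that change between exposures, produce no reliable matches, and the point cloud has nothing to say about them.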
 
 
 
 
 
 
 
 
 
 
 
 
 
Glass/Water/Mirror: https://sketchfab.com/3d-models/glass-mirror-e84bc9c6fc144ad79f5f6e2cbb1a6cc3 
 
Almond Tree: https://sketchfab.com/3d-models/almond-tree-4bb18c99506449f08c34e5bb594cd15e 
 
The blank spaces are the objects on which the technology cannot decide: the result of insufficient or contradictory information in the submitted images. Here the dissonance of contradictory information is rendered as a blank, which is to say, not rendered. The negative spaces, warped textures, holes and spikes represent the interpretations and cognitive anomalies of the imaging algorithms. These interpretations and extrapolations, which render phantom forms, are fairly common. Research into the technology’s low repeatability has indicated that the objectivity of the image is problematic: researchers found that “the software may also generate different models from the same set of photographs, i.e., if the exact same set of photographs was used two times in the software, the two resulting models will have dimensions that are different from each other, and might differ significantly from the dimensions of the structure being documented.” (Napolitano and Glisic 2018) This points out that the technology is not as transparent or objective as it seems. The ‘reality capture’ image is a computational construction which further obfuscates the mediation of its assembly, in a way that is not alluded to in the ‘capturing’ rhetoric. Nor has the discourse surrounding it sufficiently confronted the issues that 3D scanning encounters with temporality and visuality. In many ways, it is a time-based practice that produces still images. Hito Steyerl hints at the problematics of these images: that they are indeterminately photographic, algorithmic or three-dimensional in nature. They occupy a strange purgatory between 2D and 3D which she titles “2.4D”. I would go further and state that they are not one entity, but a composite of multiple visualities that mimics the linear, photographic visual reality. Composite spatial images are a strange Frankenstein’s monster of photography fed into machine learning. They are layered: sectional strata of collaged visualities, made up of 1) an estimated 3D ‘mesh’, 2) a collaged ‘image texture’, and 3) a computational transplane environment in which the mesh and texture operate together. 
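As an illustration of that layering, the sketch below unpacks a scanned model into its constituent parts using the Open3D library. The filename is hypothetical, and exactly which attributes are populated depends on the file format and Open3D version; this is an inspection of the composite, not a documented workflow from the projects linked above.

```python
# A minimal sketch of unpacking a photogrammetry model into its layered parts.
import numpy as np
import open3d as o3d

# Layer 1: the estimated 3D mesh (vertices and triangles inferred from photos).
mesh = o3d.io.read_triangle_mesh("scan.obj", enable_post_processing=True)
vertices = np.asarray(mesh.vertices)
triangles = np.asarray(mesh.triangles)
print(f"mesh: {len(vertices)} vertices, {len(triangles)} triangles")

# Layer 2: the collaged image texture, mapped onto the mesh via UV coordinates.
if mesh.has_triangle_uvs():
    uvs = np.asarray(mesh.triangle_uvs)
    print(f"texture atlas: {len(mesh.textures)} image(s), {len(uvs)} UV coordinates")

# Layer 3: the computational environment in which mesh and texture operate
# together -- here, simply a viewer scene that composites the two.
o3d.visualization.draw_geometries([mesh])
```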
 
 
 
 
 
 
 
 
 
 
 
The 3D models’ ‘meshes’ are constructed from information garnered through an algorithmic reading of multiple 2D photographs, interpreting their overlaps to ascertain spatial information and plot a 3D ‘point cloud’. The mesh constructed from this point cloud is then ‘dressed’ in a collage of the photographs to create a visual ‘skin’, or texture (a sketch of this point-cloud-to-mesh step follows below). This composite environment works well when depicting simple, visually distinctive objects. However, when the technology is used to capture complex optical phenomena, its processes begin to unravel. Objects and environments which are: 
reflective 
specular 
repetitive 
moving 
indistinct 
nebulous 
or vast 
cause problems for the visualising algorithm and produce glitches; glitches that reveal the limitations of its understanding of spatiality. What is created is not technology revealing space but space revealing the technology. The motionless, constructed images display the foundations of their construction within the image itself: 
The unknown, blank spaces: the empty voids, holes and omitted spots; the edges and ends of the model. These borders between the rendered and the invisible are the edges of the algorithm’s understanding, the very limits of its comprehension literally visualised. 
Folded surfaces and the flatness of imagery: the uncanny, composite nature of their being, evolving like the beetles and birds after leaving the machine. 
The scaffold-like mesh: constructed with biases based on the inputted imagery, these structures belie the real internal structures of the objects they represent. 
The multiple perspectives present in the textures: formed from individual, flat photos of 3D environments, collaged and stitched together, then folded, stretched and warped to fit around a digital 3D object. 
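The unravelling can be made concrete with a small sketch of the point-cloud-to-mesh step referenced above. It uses Open3D’s Poisson surface reconstruction as a stand-in for whatever meshing a given photogrammetry package performs; the filenames and thresholds are illustrative assumptions rather than a documented workflow.

```python
# A minimal sketch of turning a photogrammetric point cloud into a mesh,
# and of where the glitches come from. Filenames and thresholds are assumed.
import numpy as np
import open3d as o3d

# Load a point cloud and estimate normals (Poisson reconstruction needs them).
pcd = o3d.io.read_point_cloud("scan_points.ply")
pcd.estimate_normals()

# Poisson reconstruction always commits to a closed, continuous surface:
# sparse or contradictory regions are interpolated into smooth phantom
# geometry -- the "single entity" and "floating object" glitches.
mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
    pcd, depth=9)

# Trimming vertices with little supporting point density removes the phantom
# surfaces but leaves holes and ragged edges -- the "empty space" glitch.
densities = np.asarray(densities)
low_support = densities < np.quantile(densities, 0.05)
mesh.remove_vertices_by_mask(low_support)

o3d.io.write_triangle_mesh("scan_mesh.ply", mesh)
```

Because the meshing step must either invent a surface or cut one away, the three glitches listed at the start are two sides of the same decision: extrapolate phantom continuity, or leave a void.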
The 3D models are the peculiar animals of the Preserving Machine – simultaneously uncanny and uncertain. However, the glitches from the captured ephemera are glimpses of the “deep, impersonal force” of the machine that Labyrinth could not see. They are a revelation of the ways in which the machine operates and mediates. The most delicate, temporal and ephemeral entities are the most difficult to preserve. But what is produced isn’t a preserved artifact; it is a new, evolved beast. 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 