
Exquisite Corpus
Abstract:

The bodies of ourselves and others are most often considered through their visual surface. The interior is typically regarded only when felt within the self, and rarely with regard to others. While radiological tools have dramatically improved our capacity for non-invasive representation, their use is regrettably most often confined to the domain of health concerns. This work seeks instead to uncover the possibilities they offer for showing the full scope of our bodily form while obfuscating the accustomed boundary layer – removing the tells associated with race5 and, in some cases, gender as well. Extending this dissolution of perceived identity, it excavates our inner sameness by algorithmically merging bodily interiors into 3D human chimeras beyond the possibilities of Mendelian genetics. Through the collection of simple biometrics from participants, hybrid avatars are constructed from real patient data – extending beyond the surface manifold commonly regarded as the self in both physical and virtual worlds.


Machine Learning & Radiology:

Within radiology there is a large and growing body of research into potential uses of machine learning. Among these are algorithms trained to improve visual output quality – enabling low-dose CT imaging that minimizes radiation risk – along with tissue differentiation and automated identification of pathologies that might be missed in manual review, while also increasing the number of patients who can be reviewed. While these are all exciting avenues of research, care must be taken, as with all trained models, to be aware of biases and errors in the training sets, lest current misunderstanding and error become ingrained in future diagnosis.2,3


Research:

This project explored a novel cross-use of motion-based frame interpolation to construct the new chimeras on a layer-by-layer basis. While these techniques expect motion, having been trained on video, the progression of lateral slices produces something not greatly dissimilar from temporal motion. Existing CT interpolation methods were unsuited to the needs of this project: they interpolate spatially within a single scan rather than between two sources, and the available data was not appropriately formatted for those algorithms. The results of this cross-use are still visibly very much human, although with some artifacting, as discussed below.
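To make the cross-use concrete, here is a minimal sketch in which two pre-aligned slices from different bodies are treated as consecutive video frames and the midpoint frame is kept as the merged layer. The function film_interpolate is a hypothetical stand-in for inference with the FILM model (ref. 1); a naive linear blend is substituted so the sketch runs without the trained network.

```python
import numpy as np

def film_interpolate(frame_a: np.ndarray, frame_b: np.ndarray,
                     t: float = 0.5) -> np.ndarray:
    """Hypothetical stand-in for FILM mid-frame inference (ref. 1).

    A naive linear blend is used here so the sketch runs; the real
    workflow would call the trained model from the repository.
    """
    return (1.0 - t) * frame_a + t * frame_b

def merge_stacks(stack_a: np.ndarray, stack_b: np.ndarray) -> np.ndarray:
    """Walk two pre-aligned slice stacks layer by layer, keeping the
    midpoint frame of each pair as one layer of the chimeric body."""
    assert stack_a.shape == stack_b.shape
    return np.stack([film_interpolate(a, b) for a, b in zip(stack_a, stack_b)])
```

In the actual workflow, the blend would be replaced by the repository's TensorFlow 2 inference, which produces far more coherent in-between frames than simple averaging.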


Methods:

Data sets were selected primarily for visual quality, favoring those with control sets or those lacking obvious visual pathologies. From these, samples were selected, converted into image sequences of layered slices, and luminance normalized. Image sequences were manually aligned per pairing. At each layer in the sequence, a merged interpolation of the two frames was produced1, and new sequences were constructed from these inter-frames.
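The normalization and alignment steps might look like the following sketch, assuming each scan has already been exported as a float32 slice stack in the range [0, 1]. The target statistics and the translation offsets are illustrative values; the text specifies only that luminance was normalized and that alignment was done manually per pairing.

```python
import numpy as np

def normalize_luminance(stack: np.ndarray, target_mean: float = 0.5,
                        target_std: float = 0.2) -> np.ndarray:
    """Match a slice stack's global mean/std so two scans blend comparably.
    The target values are arbitrary choices for this sketch."""
    out = (stack - stack.mean()) / (stack.std() + 1e-8)
    return np.clip(out * target_std + target_mean, 0.0, 1.0)

def apply_alignment(stack: np.ndarray, dy: int, dx: int) -> np.ndarray:
    """Apply a manually determined per-pairing translation.
    np.roll keeps the array shape; wrapped edges land in background air."""
    return np.roll(stack, shift=(dy, dx), axis=(1, 2))
```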

Additionally, the merged data was used with segmentation software to extract models of individual organs, which were produced as 3D prints in flexible resin. These include a brain composed of two subjects – a 19-year-old female and a 65-year-old male – a heart composed of one male and one female subject (no further data), and a voice box composed of three individuals (no data).
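The segmentation and meshing tools are not named above, so the following is only one plausible route from a merged voxel mask to a printable mesh: scikit-image's marching cubes followed by an STL export with numpy-stl.

```python
import numpy as np
from skimage import measure
from stl import mesh as stl_mesh  # numpy-stl

def mask_to_stl(mask: np.ndarray, path: str,
                spacing: tuple = (1.0, 1.0, 1.0)) -> None:
    """Extract an isosurface from a 0/1 voxel organ mask and write an STL."""
    verts, faces, _, _ = measure.marching_cubes(
        mask.astype(np.float32), level=0.5, spacing=spacing)
    solid = stl_mesh.Mesh(np.zeros(faces.shape[0], dtype=stl_mesh.Mesh.dtype))
    for i, face in enumerate(faces):
        solid.vectors[i] = verts[face]
    solid.save(path)
```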


Data Sources6:

These constructions were made from each of three body sections – the head, chest, and abdomen[6a,6b,6c]. Each section was produced from its own data source of different individuals. In the present version, three subjects per section allow 3 × 3 = 9 pairings each, yielding 9³ = 729 possible chimeric individuals, as counted in the sketch below.
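The count can be verified directly, assuming three source subjects per body section and ordered pairs (so 3 × 3 = 9 merges per section):

```python
from itertools import product

subjects_per_section = 3                   # assumed from the 3x3 figure above
sections = ["head", "chest", "abdomen"]
pairs = list(product(range(subjects_per_section), repeat=2))  # 9 ordered pairs
total = len(pairs) ** len(sections)
print(total)  # 9**3 = 729 possible chimeric individuals
```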


Discussion:

The present workflow produces a number of visual tearing artifacts. These result from mismatches between bodies and from the layer-by-layer merging approach. Head samples were particularly vulnerable due to significant variance in the tilt and rotation of the heads during scanning. They were therefore aligned by the centers of the eyes, which left the lower jaw fragmenting a great deal under these variations. Glare from metal tooth fillings further exacerbated the issue. Inasmuch as this is an artistic exploration and creative endeavor, the visual artifacts produce a more compelling result, preventing the output from looking too medical while maintaining a semi-familiar reference.


Further Explorations:

It is expected that significantly improved results could be obtained by training the interpolation algorithm on the intended data. While several voxel-space 3D and 4D algorithms exist to produce super-resolution representations of radiological data, none are designed in a manner that might merge two samples. For data sets where segmentation data is available, it may be useful both to improve the quality of interpolation and to aid in producing meshed (rather than voxel) 3D models for physical reproduction.


References

1 Reda F, Kontkanen J, Tabellion E, Sun D, Pantofaru C, Curless B. TensorFlow 2 implementation of "FILM: Frame Interpolation for Large Motion". GitHub repository, 2022. https://github.com/google-research/frame-interpolation

2 Tang X. The role of artificial intelligence in medical imaging research. BJR Open. 2019 Nov 28;2(1):20190031. doi:10.1259/bjro.20190031

3 Pesapane F, et al. Artificial intelligence in medical imaging: threat or opportunity? Radiologists again at the forefront of innovation in medicine. Eur Radiol Exp. 2018 Oct 24;2(1):35. doi:10.1186/s41747-018-0061-6

4 Schaefferkoetter J, Yan J, Moon S, Chan R, Ortega C, Metser U, Berlin A, Veit-Haibach P. Deep learning for whole-body medical image generation. Eur J Nucl Med Mol Imaging. 2021 Nov;48(12):3817-3826. doi: 10.1007/s00259-021-05413-0. Epub 2021 May 22. PMID: 34021779.

5 Banerjee I, Bhimireddy AR, et al. Reading Race: AI Recognizes Patient's Racial Identity in Medical Images. arXiv. 2021 Jul. https://doi.org/10.48550/arXiv.2107.10356

6 All data was procured through the Kaggle collection of open research datasets; only sets with clear public-use licenses were selected. Note that the following citations contain the information as provided with each dataset, are incomplete and irregular by academic citation standards, and should be viewed as such. Further information may be obtained by contacting the data providers directly through the listed URLs.
A Head: Qure.AI HeadCT: Head CTs and physician readings from 500 patients. Provided by Chris Crawford and K Scott Mader. License: CC BY-NC-SA 4.0. https://www.kaggle.com/datasets/crawford/qureai-headct
B Chest: CT Chest Segmentations. https://www.kaggle.com/datasets/polomarco/chest-ct-segmentation. From: Segmentation masks for CT scans from the OSIC Pulmonary Fibrosis Progression competition, Thuringia, Germany. License: CC0: Public Domain. https://www.kaggle.com/datasets/sandorkonya/ct-lung-heart-trachea-segmentation
C Abdomen: CT KIDNEY DATASET: Normal-Cyst-Tumor and Stone; MD Nazmul Islam & MD Humaion Kabir Mehedi, Dhaka, Bangladesh. License: Public with attribution. https://www.kaggle.com/datasets/nazmul0087/ct-kidney-dataset-normal-cyst-tumor-and-stone


» August 13, 2019 ... Last Update: January 20, 2020