Polyp Shape Recovery from Single Endoscope Image using Medical Suture
RESEARCH ARTICLE

The Open Bioinformatics Journal, 31 Jan 2019. DOI: 10.2174/1875036201912010001

Abstract

Background:

Polyp shape plays an important role in colorectal diagnosis. However, endoscopy images are usually composed of nonrigid objects such as polyps, which makes polyp shape recovery challenging. A support system for colorectal diagnosis based on polyp shape is therefore in demand.

Introduction:

Shape from Shading (SFS) is one valuable approach, based on photoclinometry, for polyp shape recovery. SFS and endoscope images appear compatible at first sight, but there are constraints on applying SFS to endoscope images. SFS approaches need parameters such as the depth from the endoscope lens to the surface and the surface reflectance factor C. Furthermore, those approaches assume that the whole surface has the same value of C under Lambertian reflectance.

Methods:

This paper contributes to mitigating the constraints on applying SFS to endoscope images based on a cue from the medical suture. An extracted medical suture is used to estimate the parameters, and a polyp shape recovery method is proposed using both geometric and photometric constraint equations. Notably, the proposed method realizes polyp shape recovery from a single endoscope image.

Results:

Experiments confirmed that the approximate shape of a polyp model was recovered, and that the proposed method recovered the absolute size and shape of a polyp from a single endoscope image using the medical suture information and the obtained parameters.

Conclusion:

This paper proposed a polyp shape recovery method which mitigates the constraints on applying SFS to endoscope images by using the medical suture. Notably, the proposed method realized polyp shape recovery from a single endoscope image without assuming a single uniform Lambertian reflectance factor over the whole surface.

Keywords: Shape from shading, Lambertian surface, Specular removal, RANSAC, Medical endoscope, Colorectal diagnosis.

1. INTRODUCTION

The endoscope is a piece of medical equipment for diagnosing colorectal polyps. It is used to detect polyps of all sizes, it allows biopsy of lesions and resection of most polyps [1, 2], and it is nowadays considered the gold standard for colon screening [3, 4]. There are still open challenges to overcome, such as reducing the miss rate [5-7]. A polyp guideline for the diagnosis, treatment, and surveillance of patients with colorectal polyps, initially published in 1993 [8], has since developed along evidence-based lines [9-11]. The guideline indicates that the shape and size of a colonic polyp are biomarkers that correlate with its risk of malignancy and guide its clinical management.

The colonic polyp has various shapes depending on its medical condition [12]. Histologically, polyps are classified as neoplastic (adenomas) or nonneoplastic [13, 14]. Neoplastic polyps, or adenomas, have malignant potential and are classified according to the World Health Organization as tubular, tubulovillous, or villous adenomas, depending on the presence and volume of villous tissue [15]. Increasing dysplasia and, presumably, malignant potential correlate with increasing adenoma size, villous component, and patient age [16], and the likelihood of invasive carcinoma also increases with increasing polyp size [14]. From this perspective of polyp shape and size, the accuracy of polyp measurement is critical for accurate diagnosis [17, 18].

Most small polyps are adenomas with some malignant potential although the likelihood of cancer already existing in a polyp is small (<1%) [19, 20]. Furthermore, hyperplastic polyps, which are common in the lower left colon, have no malignant potential, and several prospective studies have shown that they do not predict an increased prevalence of adenomas in the proximal colon [21-23].

Meanwhile, advanced adenomas have characteristic features: villous tissue, larger size (>1 cm), or appreciable high-grade dysplasia [24]. A sizeable sessile polyp (>2 cm) usually contains villous tissue with high malignant potential and tends to recur locally after resection [25]. Regarding polyp shape, small flat colorectal adenomas are reported to have a purportedly high malignant potential [26].

Computer-Aided Diagnosis (CAD) has been growing rapidly, not only for colorectal polyp diagnosis but also in various other active fields of medical imaging [27, 28]. Polyp detection and classification techniques are the main CAD technologies developed for colonoscopy.

Development of these technologies is also driven by competitions which provide datasets [29]. Several polyp classification techniques [30-32] have been developed using a peculiar endoscopic imaging technology, Narrow Band Imaging (NBI) [33, 34], which emphasizes blood vessels. Various polyp detection techniques have also been developed [35-42]. In particular, some approaches put polyp detection into practice based on shape features [43-45], texture features [46, 47], or combined texture and shape features [48]. Polyp shapes are thus also important for developing detection technologies.

Hence, from the above descriptions, a supporting colorectal diagnosis system based on polyp shape is in demand.

In computer vision, techniques to recover shape are called shape-from-X techniques, where X is a cue such as shading [49-55], stereo [56, 57], motion [58-60], texture [61-63], shadows [64, 65], specularities [66-68], silhouette [69-71], or defocus [72-78]. Shape from stereo requires a camera with multiple lenses; however, a general endoscope has a single lens, and hardware-based techniques [79-81] require a peculiar endoscope for polyp shape recovery.

There are software techniques to infer the 3D structure of the colon from video sequences, such as computational stereo [82], simultaneous localization and mapping [83], and shape from motion [84]. Shape from motion, shadows, and specularities generally assume rigid target objects, but endoscopy images usually consist of nonrigid objects such as polyps, and shape from texture requires a known texture as the cue for shape recovery. Applying a photoclinometry-based recovery approach is therefore reasonable for the issue mentioned earlier.

Shape from Shading (SFS) [49] is one valuable approach based on photoclinometry. Since the first SFS technique was developed by Horn in the early 1970s [49], various approaches have been developed [50-55]. SFS uses image intensity to recover the surface shape from a single image. Some approaches [85-97] recovered 3D shape from endoscope images based on SFS. Paper [90] proposed a polyp recovery approach using both photometric and geometric constraints, assuming a one-light-source endoscope. Another approach [94] recovered polyp shape considering a more realistic endoscope with two light sources, where a neural network was used to modify the obtained surface gradients. Paper [85] proposed a 3D shape recovery method targeting an artificial spine rather than a polyp.

SFS and endoscope images appear compatible at first sight, but there are constraints on applying SFS to endoscope images. Those approaches need parameters such as the depth from the endoscope lens to the surface and the surface reflectance factor C. Furthermore, those approaches assume a uniform Lambertian surface with a constant reflectance factor C.

This paper contributes to mitigating the constraints on applying SFS to endoscope images based on a cue from the medical suture. An extracted medical suture is used to estimate the parameters, and a polyp shape recovery method is proposed using both geometric and photometric constraint equations. It is noted that the proposed method realizes polyp shape recovery from a single endoscope image.

The outline of this paper is as follows. First, the algorithm section introduces the observation system and photometric constraints and describes the preprocessing and parameter estimation for polyp shape recovery. The experiment section then evaluates the proposed method through shape recovery on model images and actual endoscope images. Finally, the conclusion section discusses the proposed method and future work.

2. ALGORITHM

The proposed approach assumes the observation system of a two-light-source endoscope and consists of the following steps.

Step 1. Estimate the camera parameters by conducting camera calibration.

Step 2. Conduct specular highlight removal.

Step 3. Generate a Lambertian image for both medical suture region and intestinal-inner-walls.

Step 4. Conduct medical suture segmentation.

Step 5. Obtain depth Z and reflectance factor C using the border between medical sutures and intestinal-inner-walls.

2.1. Observation System of Two Light Sources Endoscope

Some endoscopes used in medical diagnosis have two light sources.

The proposed method assumes an observation system with point light source illumination and perspective projection. Fig. (1) shows the observation system of the endoscope with two light sources.

Fig. (1). Observation system of two light sources endoscope.

Here, let the coordinate of the center of the lens be (0, 0, 0), let the two light source coordinates be s1 = (a, b, 0) and s2 = (c, d, 0), let f be the focal length, and let the distances from the light sources to the surface be l1 and l2. Let n be the surface normal vector. The image intensity E and the reflectance factor C are related by Eq.(1), following the inverse square law of illuminance.

(1)

Eq.(2) represents the normal vector n at an arbitrary point on a diffuse reflectance surface.

(2)

Here, let p and q be the surface gradient parameters defined by Eq.(3).

(3)

Let an arbitrary surface point be (X, Y, Z); then Eq.(4) represents l1 and Eq.(5) represents l2.

(4)

Eqs. (6) and (7) represent the light source vectors s1 and s2 at an arbitrary point on the diffuse reflectance surface, where a, b, c, and d are the light source coordinates.

(5)
(6)
(7)

Eq. (8) represents the world coordinates X and Y, respectively, based on the perspective projection.

(8)

Substituting Eqs.(2) through (8) into Eq.(1), Eq.(9) gives E in terms of C and Z.

(9)

The proposed method recovers the Z distribution from endoscope images postulated under this observation model. Eq.(9) is used to solve for C of the medical suture and of the intestinal-inner-wall (based on Eq.(16) and Eq.(17)).
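Since the bodies of Eqs.(1)-(9) are not reproduced in this text, the following NumPy sketch illustrates the forward photometric model they describe (two point sources on the lens plane, inverse-square attenuation, perspective projection). The function name and the sign convention of the normal are assumptions, not taken from the paper:

```python
import numpy as np

def predicted_intensity(x, y, p, q, Z, C, s1, s2, f):
    """Predicted intensity E at pixel (x, y) for a Lambertian point at
    depth Z with gradient (p, q) and reflectance factor C, lit by two
    point sources s1, s2 on the lens plane (inverse-square law).
    Sketch of the model in Eqs.(1)-(9); sign conventions are ours."""
    X, Y = x * Z / f, y * Z / f                     # perspective projection (Eq.(8))
    P = np.array([X, Y, Z], float)                  # surface point
    n = np.array([p, q, -1.0])
    n /= np.linalg.norm(n)                          # camera-facing unit normal (Eq.(2))
    E = 0.0
    for s in (s1, s2):
        v = np.asarray(s, float) - P                # surface point -> light source
        l = np.linalg.norm(v)                       # distances l1, l2 (Eqs.(4), (5))
        E += max(float(np.dot(n, v)) / l, 0.0) / l**2   # Lambertian cosine / l^2
    return C * E                                    # Eq.(1)/(9)
```

For a fronto-parallel point on the optical axis (p = q = 0, x = y = 0) the result reduces analytically to 2CZ / l^3 with l the common source-to-point distance.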

Fig. (2). Examples of checker board.

2.2. Camera Calibration

Camera parameters of the endoscope are estimated using multiple checkerboard images taken by the endoscope with existing camera calibration techniques [98, 99]. Fig. (2) shows examples of checkerboard images captured by the endoscope.

Furthermore, a sphere was captured by the endoscope as a known object to estimate the coordinates of the two light sources, s1 = (a, b, 0) and s2 = (c, d, 0).

2.3. Specular Highlight Removal

The spectral energy distribution of the light reflected from a surface is the product of the spectral energy distribution of the illumination and the surface reflectance. Following the dichromatic reflection model [100], the reflected light is composed of a specular and a diffuse reflection component. Specular components affect the result of shape recovery under an SFS approach and frequently arise in endoscope images. Paper [101] proposed removing specular components from endoscope images by applying an inpainting process to them; however, regions modified by inpainting lose their surface information.

The diffuse color component has the characteristic that the maximum fraction of the diffuse color component within local diffuse patches of a color image changes smoothly, so specular components can be removed by exploiting this characteristic. Furthermore, since the proposed approach obtains the depth Z and reflectance factor C from the border between the medical suture and the intestinal-inner-walls, specular removal must be done without destroying this border. The proposed approach therefore removes specular components by introducing a bilateral filter based on the methods of [102, 103].

2.4. Generating Lambertian Image

Many approaches to shape recovery have been formulated as an SFS problem assuming Lambertian reflectance. The proposed approach also assumes Lambertian reflectance for shape recovery. Therefore, a Lambertian image is generated for both the medical suture region and the intestinal-inner-walls.

As methods of generating a Lambertian image, Paper [104] generates a diffuse component image by removing the specular component from images of an object taken from multiple viewpoints, and Paper [105] developed a method to convert the original image to a Lambertian image using object rotation. The endoscope environment, however, allows neither a polarizing plate, object rotation, nor multiple images from different viewpoints, because of camera motion and the peristaltic movement of objects inside the body.

Therefore, this paper generates a uniformly diffused reflectance surface from the original RGB endoscope image based on the method of [106]. The approach uses color clustering with scalar quantization in RGB space and treats two neighboring points located along the boundary between regions of different color reflectance as having the same reflectance factor.

A Lambertian image for both the medical suture region and the intestinal-inner-walls is generated by absorbing the differences in relative reflectance factor, yielding a nearly uniform Lambertian reflectance image across the differently colored reflectance regions of the original image.
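A loose sketch of the idea, assuming uniform scalar quantization and per-cluster averaging (the bin width and the cluster-mean simplification are our assumptions; method [106] additionally matches reflectance across cluster boundaries):

```python
import numpy as np

def quantize_rgb(img, step=32):
    """Scalar quantization in RGB space: snap each channel to the center
    of a coarse bin, clustering similar reflectance colors together."""
    return (img // step) * step + step // 2

def relative_reflectance(img, step=32):
    """Assign each pixel the mean gray level of its quantized color
    cluster, approximating a uniform-reflectance (Lambertian-like)
    image. Loose sketch of the clustering step of [106]."""
    q = quantize_rgb(img.astype(np.int64), step)
    labels = q[..., 0] * 1_000_000 + q[..., 1] * 1_000 + q[..., 2]  # cluster id
    gray = img.mean(axis=-1)
    out = np.zeros_like(gray)
    for lab in np.unique(labels):
        m = labels == lab
        out[m] = gray[m].mean()                                     # cluster mean
    return out
```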

2.5. Obtaining Depth Z and Reflectance Factor C

The depth Z and the reflectance factor C for shape recovery under the SFS approach are obtained by locally estimating planes of the medical suture that are horizontal with respect to the lens in the observation system.

Fig. (3). Horizontal plane of columnar against lens.

2.5.1. Estimation of the Horizontal Plane

The sections of a columnar form that are horizontal to the lens are derived by assuming continuity of the width from the columnar centerline to both edges: the columnar width cropped by such a plane remains constant while the cropped region is horizontal to the lens, as shown in Fig. (3).

The horizontal planes of the medical suture can be obtained locally based on this property. The procedure of obtaining the horizontal plane is as follows.

Step 1. Extract medical suture region in Fig. (5) from the original image in Fig. (4).

Fig. (4). Original image of medical suture.
Fig. (5). Example of extracted medical suture region.
Fig. (6). Example of line thinning processing.

Step 2. Extract centerline of the medical suture by performing line thinning processing as shown in Fig. (6).

Step 3. Extract edge of medical suture using morphology operation as shown in Fig. (7).

Fig. (7). Example of edge extraction.

Step 4. Draw a line orthogonal to the centerline and crop it at both edges. Finally, extract continuous regions where the cropped line has the same width, as shown in Fig. (8).

Fig. (8). Example of estimation of horizontal plane.
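For a roughly horizontal suture, Steps 2-4 above can be approximated in pure NumPy by using the column-wise width of the suture mask in place of the explicit line thinning and orthogonal-line cropping (a simplification; the function and parameter names are ours):

```python
import numpy as np

def horizontal_sections(suture_mask, min_run=6):
    """Find candidate horizontal-plane sections of a roughly horizontal
    suture: the foreground height of each mask column stands in for the
    orthogonal line cropped at the suture edges, and runs of at least
    `min_run` contiguous columns with identical width are kept."""
    widths = suture_mask.astype(bool).sum(axis=0)   # suture width per column
    cols = np.nonzero(widths)[0]                    # columns containing suture
    sections, start = [], 0
    for i in range(1, len(cols) + 1):
        if (i == len(cols) or widths[cols[i]] != widths[cols[start]]
                or cols[i] != cols[i - 1] + 1):
            if i - start >= min_run:                # constant-width run found
                sections.append((int(cols[start]), int(cols[i - 1]),
                                 int(widths[cols[start]])))
            start = i
    return sections  # (first column, last column, constant width)
```

The same constant-width criterion (6 pixels in the experiments) selects the horizontal-plane candidates.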

2.5.2. Estimation of Medical Suture Z and C

Fig. (9) shows an observation model for obtaining depth parameter Z of the medical suture using its horizontal plane.

The depth Z from the endoscope lens can be calculated using this model and the estimated horizontal plane of the medical suture. The procedure for calculating Z is as follows.

Since Δ LOIi ~ Δ LSoSi and LSiSc is an external angle of Δ LSoSi, LSiSc is derived from Eq.(10).

(10)

Since Δ LOIc ~ Δ LSoSc, from Eq.(11), LScSi can be expressed by Eq.(12).

(11)
(12)

Similarly, LScPi is derived from Eq.(13).

(13)
Fig. (9). Observation system of horizontal plane.

The hypotenuse from the lens L to the suture center Sc in Δ LScPi, i.e., the distance LSc, can be obtained from Eq.(14). Here, the distance PcSc equals the radius of the medical suture.

(14)

From Δ LZPc ~ Δ LOIc, the depth Z is derived from Eq.(15).

(15)
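As a first-order check on this derivation, where the suture is horizontal to the lens the depth follows the pinhole relation Z = f·W/w for a known suture diameter W and imaged width w. This is a simplification of Eqs.(10)-(15), which also account for the cylindrical cross-section of the suture:

```python
def depth_from_suture(width_px, f_px, suture_diameter_mm=0.8):
    """First-order pinhole approximation of the depth Z at a horizontal
    suture section: Z = f * W / w, with W the known suture diameter
    (0.8 mm in the paper's experiments), w its imaged width in pixels,
    and f the focal length in pixels. A simplification of the paper's
    similar-triangle derivation, not its exact formula."""
    return f_px * suture_diameter_mm / width_px
```

With the calibrated focal length of about 718.7 pixels, an imaged suture width of 24 pixels would give a depth of roughly 24 mm, the order of magnitude seen in Table 2.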

Finally, F(p, q, a, b, c, d, f, Z) is defined as in Eq.(16) using Eq.(9); then the surface reflectance factor C of the intestinal-inner-wall is estimated by optimization using RANSAC [107] and Eq.(17). Here, p and q are the surface gradient parameters, a, b, c, and d are the light source coordinates, f is the focal length of the lens, Z is the calculated depth, and Ei is each observed intensity. In addition, p and q of the intestinal-inner-walls at the points neighboring the horizontal sections of the medical suture are set to zero.

(16)

(17)
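The RANSAC estimation of C can be sketched as follows, assuming observations of the form Ei ≈ C·Fi at the suture-neighboring points where p = q = 0 (the iteration count and inlier tolerance are our assumptions, not the paper's settings):

```python
import numpy as np

def ransac_reflectance(F, E, n_iter=200, tol=None, rng=None):
    """Robustly fit E_i = C * F_i (Eq.(17) form): hypothesise C from one
    random sample, count inliers within `tol`, and refit C on the best
    inlier set by least squares. Sketch of the RANSAC [107] step."""
    F, E = np.asarray(F, float), np.asarray(E, float)
    rng = np.random.default_rng(rng)
    if tol is None:
        tol = 0.05 * np.median(np.abs(E))       # assumed inlier tolerance
    best = np.zeros(len(E), bool)
    for _ in range(n_iter):
        i = rng.integers(len(E))
        if F[i] == 0:
            continue
        C = E[i] / F[i]                          # one-sample hypothesis
        inliers = np.abs(E - C * F) < tol
        if inliers.sum() > best.sum():
            best = inliers
    m = best
    return float(np.dot(F[m], E[m]) / np.dot(F[m], F[m]))  # least-squares refit
```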
Table 1.
Result of estimation.
Parameters Result of Calibration
Focal Length (pixels) [718.7447 ± 0.8387, 718.3827 ± 0.8654]
Principal Point (pixels) [879.0439 ± 0.4669, 533.5813 ± 0.4240]
Radial Distortion [-0.3913 ± 0.0010, 0.1178 ± 0.0008]

3. EXPERIMENT

Experiments were conducted using colon model images and real endoscope images. A medical suture with a diameter of 0.8 mm was used. The inner parameters of the endoscope were obtained from camera calibration, and the two light source coordinates were estimated using a known sphere object of 66 mm diameter.

3.1. Camera Calibration

Table 1 shows the result of camera calibration. Fundamental parameters of the endoscope, such as the focal length, principal point, and radial distortion, were obtained by performing camera calibration. Here, two values per parameter were obtained, reflecting the aspect ratio of the observed image.

The positions of the two light sources on the endoscope were estimated from the sphere object image as s1 = (a: -5.001 mm, b: 1.996 mm, 0 mm) and s2 = (c: 5.001 mm, d: 1.996 mm, 0 mm).

3.2. Experiment with Model Images

3.2.1. Results of Estimated Depth Z for Model Images

Estimated results for the horizontal plane of the medical suture are shown in Figs. (10-15). Here, sections where the same width continues for more than 6 pixels while tracing the centerline of the medical suture were adopted as candidate horizontal plane sections.

Fig. (10). Original image in scene 1.
Fig. (11). Horizontal plane in scene 1.
Fig. (12). Original image in scene 2.

Table 2 shows the estimated MEAN and STD of both Z and C for each scene. Here, each section in each scene is an obtained horizontal plane section where the same width continues for more than 6 pixels.

From these results, the horizontal sections of the medical suture could be obtained with a variation of depth Z below 0.03 mm in each scene.

Fig. (13). Horizontal plane in scene 2.
3.2.2. Results of Shape Recovery

Polyp shape recovery was performed using the calculated Z and C. Here, the reflectance factor was estimated by the following process: C was estimated in order, starting from the sections with the longest runs of constant width and the smallest STD of Z, and the MEDIAN of C with a low STD was selected. Moreover, the Smirnov-Grubbs test [108] was applied to reject outliers in regions that potentially contain intestinal-inner-wall other than blood vessels. A polyp model with a diameter of about 10 mm was used, and some regions where shape recovery was interfered with by the hood cover of the endoscope were cropped. Figs. (16-18) show the results of the recovered polyp shapes. From these results, it can be confirmed that each polyp shape was recovered at around 10 mm, i.e., the proposed method recovered the approximate shape and size using the calculated parameters Z and C.
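The outlier screening described above can be sketched with a standard two-sided Smirnov-Grubbs test (the significance level below is an assumption; the paper does not state its settings):

```python
import numpy as np
from scipy import stats

def grubbs_outlier(x, alpha=0.05):
    """Two-sided Smirnov-Grubbs test on a sample of C estimates: return
    the index of the most extreme value if its G statistic exceeds the
    t-based critical value, else None. Standard formulation; alpha is
    an assumed setting."""
    x = np.asarray(x, float)
    n = len(x)
    if n < 3 or x.std(ddof=1) == 0:
        return None
    G = np.abs(x - x.mean()) / x.std(ddof=1)          # studentized deviations
    i = int(G.argmax())
    t2 = stats.t.ppf(1 - alpha / (2 * n), n - 2) ** 2
    G_crit = (n - 1) / np.sqrt(n) * np.sqrt(t2 / (n - 2 + t2))
    return i if G[i] > G_crit else None
```

In practice the test would be applied repeatedly, removing one flagged value at a time, until no outlier remains.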

Table 2.
Result of estimated Z.
Scene Section Estimated Z [mm] Estimated C
MEAN STD MEAN STD
1 1 23.8076 0.0000 816025.5 270.335
2 23.4538 0.0000 808439.7 1618.93
3 23.4092 0.0224 809689.3 262.500
4 23.2639 0.0000 804883.0 1840.40
5 23.0954 0.0000 797798.0 294.000
6 22.3189 0.0000 780459.5 583.607
7 22.1798 0.0000 777842.3 600.260
8 22.0867 0.0000 775345.3 397.500
9 21.9934 0.0000 773461.0 406.000
10 21.6174 0.0000 765179.5 310.420
11 21.2372 0.0000 758073.0 329.512
2 1 23.1742 0.0225 800335.5 1120.31
2 23.0615 0.0226 797565.0 1086.58
3 21.0095 0.0241 754426.3 1227.83
4 20.6835 0.0280 748430.8 1333.70
5 20.5377 0.0281 746197.8 1345.40
6 20.3790 0.0245 743103.5 1271.01
3 1 22.9145 0.0000 668735.5 4694.48
2 22.3189 0.0000 647800.0 0.00000
3 22.1333 0.0000 619218.0 10523.2
4 21.7118 0.0000 714005.0 0.00000
5 21.4753 0.0000 706358.5 8857.93
6 20.4157 0.0000 743463.3 583.500
Fig. (14). Original image in scene 3.
Fig. (15). Horizontal plane in scene 3.
Fig. (16). Recovered shape in scene 1.
Fig. (17). Recovered shape in scene 2.

3.3. Experiment with Real Endoscope Images

3.3.1. Results of Estimated Depth Z

Estimated results of the horizontal plane of the medical suture are shown in Figs. (19-26). Here, sections where the same width continues for more than 6 pixels while tracing the centerline of the medical suture were adopted as horizontal plane sections.

Table 3 shows estimated MEAN and STD of both Z and C in each scene.

Fig. (18). Recovered shape in scene 3.
Fig. (19). Original image in scene 1.
Fig. (20). Horizontal plane in scene 1.
Fig. (21). Original image in scene 2.
Fig. (22). Horizontal plane in scene 2.
Fig. (23). Original image in scene 3.
Fig. (24). Horizontal plane in scene 3.
Fig. (25). Original image in scene 4.
Fig. (26). Horizontal plane in scene 4.

From these results, horizontal sections of the medical suture were obtained with small variation of depth Z in most sections of each scene. These results suggest that the horizontal plane of the medical suture and the depth Z were obtained with good accuracy in each endoscope image.

Table 3.
Result of estimated Z.
Scene Section Estimated Z [mm] Estimated C
MEAN STD MEAN STD
1 1 33.8287 0.0000 1112006.7 6757.98
2 33.7345 0.0000 1147150.0 70.7107
3 33.7206 0.0000 1077240.0 0.00000
4 33.4927 0.0110 1062390.0 37108.96
5 33.1182 0.0000 972636.00 10970.88
6 33.0097 0.0000 973917.50 7732.213
2 1 29.4382 0.0184 334133.67 4460.35
2 29.8657 0.0177 364612.3 8115.47
3 17.0222 0.0000 257711.0 2964.95
4 17.0097 0.0022 296949.0 5476.81
5 16.9246 0.0000 305874.0 8309.92
6 33.8346 0.3201 939564.0 93865.9
7 33.1624 0.0124 1152400 20774.8
8 16.5286 0.1686 318861.0 13696.66
3 1 34.5163 0.0031 692530.3 30117.66
2 34.2091 0.0068 1118625 10175.27
4 1 12.6820 3.6753 159821.6 85942.39
2 15.2671 0.0000 221748.5 2563.262
3 12.6662 3.6783 156090.3 85166.34
4 15.1144 0.0000 226365.0 3228.649
5 15.1002 0.0000 226979.0 3254.105
Fig. (27). Recovered shape in scene 1.
Fig. (28). Recovered shape in scene 2.
Fig. (29). Recovered shape in scene 3.
Fig. (30). Recovered shape in scene 4.

3.3.2. Results of Shape Recovery

Polyp shape recovery was performed using the calculated Z and C. Here, C was estimated in order, starting from the sections with the longest runs of constant width and the smallest STD of Z, and the MEDIAN of C with a low STD was selected. Moreover, the Smirnov-Grubbs test [108] was applied to reject outliers in regions that potentially contain blood vessels other than intestinal-inner-wall. Figs. (27-30) show the results of polyp shape recovery.

Here, some regions where shape recovery was interfered with by the hood cover of the endoscope were cropped. From the recovered results, it can be confirmed that the approximate polyp shape and size could be recovered using the calculated parameters Z and C.

CONCLUSION

This paper proposed a polyp shape recovery method which mitigates the constraints on applying SFS to endoscope images by using the medical suture. The proposed approach assumed a two-light-source endoscope, in accordance with the actual environment, for polyp shape recovery. The necessary parameters for applying SFS were estimated: the camera's inner parameters were obtained by camera calibration, and the depth Z and reflectance factor C were estimated using the horizontal plane of the medical suture and its neighboring intestinal-inner-walls. Through experiments, it was confirmed that the approximate shape and size of the polyp model were recovered using the medical suture information and the obtained parameters from a single endoscope image. Notably, the proposed method realized polyp shape recovery from a single endoscope image without equalizing the reflectance factor through Lambertian processing. Extending the proposed method to other reference objects (e.g., blood vessels) remains future work.

ETHICS APPROVAL AND CONSENT TO PARTICIPATE

Not applicable.

HUMAN AND ANIMAL RIGHTS

No animals/humans were used in the studies that are the basis of this research.

CONSENT FOR PUBLICATION

Not applicable.

CONFLICT OF INTEREST

The authors declare no conflict of interest, financial or otherwise.

ACKNOWLEDGEMENTS

This research is supported by JSPS Grant-in-Aid for Scientific Research (C) (17K00252) and Chubu University Grant.

REFERENCES

1
Rex DK, Mark D, Clarke B, Lappas JC, Lehman GA. Flexible sigmoidoscopy plus air-contrast barium enema versus colonoscopy for evaluation of symptomatic patients without evidence of bleeding. Gastrointest Endosc 1995; 42(2): 132-8.
2
Irvine EJ, O’Connor J, Frost RA, et al. Prospective comparison of double contrast barium enema plus flexible sigmoidoscopy vs colonoscopy in rectal bleeding: Barium enema vs colonoscopy in rectal bleeding. Gut 1988; 29(9): 1188-93.
3
Stock C, Brenner H. Utilization of lower gastrointestinal endoscopy and fecal occult blood test in 11 European countries: Evidence from the Survey of Health, Aging and Retirement in Europe (SHARE). Endoscopy 2010; 42(7): 546-56.
4
Sonnenberg A, Amorosi SL, Lacey MJ, Lieberman DA. Patterns of endoscopy in the united states: Analysis of data from the centers for medicare and medicaid services and the national endoscopic database. Gastrointest Endosc 2008; 67(3): 489-96.
5
Segnan N, Patnick J, Von Karsa L. European guidelines for quality assurance in colorectal cancer screening and diagnosis 2010.
6
van Rijn JC, Reitsma JB, Stoker J, Bossuyt PM, van Deventer SJ, Dekker E. Polyp miss rate determined by tandem colonoscopy: A systematic review. Am J Gastroenterol 2006; 101(2): 343-50.
7
Kumar S, Thosani N, Ladabaum U, et al. Adenoma miss rates associated with a 3-minute versus 6-minute colonoscopy withdrawal time: A prospective, randomized trial. Gastrointest Endosc 2017; 85(6): 1273-80.
8
Bond JH. Polyp guideline: Diagnosis, treatment, and surveillance for patients with nonfamilial colorectal polyps. Ann Intern Med 1993; 119(8): 836-43.
9
US Preventive Services Task Force. Guide to clinical preventive services: Report of the US Preventive Services Task Force 1989.
10
Winawer SJ, Fletcher RH, Miller L, et al. Colorectal cancer screening: Clinical guidelines and rationale. Gastroenterology 1997; 112(2): 594-642.
11
Byers T, Levin B, Rothenberger D, Dodd GD, Smith RA, Detection ACS. American cancer society guidelines for screening and surveillance for early detection of colorectal polyps and cancer: Update 1997. CA Cancer J Clin 1997; 47(3): 154-60.
12
Spratt JS Jr, Ackerman LV, Moyer CA. Relationship of polyps of the colon to colonic cancer. Ann Surg 1958; 148(4): 682-96.
13
Fenoglio-Preiser CM, Hutter RV. Colorectal polyps: Pathologic diagnosis and clinical significance. CA Cancer J Clin 1985; 35(6): 322-44.
14
Fenoglio CM, Pascal RR. Colorectal adenomas and cancer: Pathologic relationships. Cancer 1982; 50(11)(Suppl.): 2601-8.
15
World Health Organization. International histological classification of tumours 1967; 6
16
O’Brien MJ, Winawer SJ, Zauber AG, et al. The National Polyp Study. Patient and polyp characteristics associated with high-grade dysplasia in colorectal adenomas. Gastroenterology 1990; 98(2): 371-9.
17
Fennerty MB, Davidson J, Emerson SS, Sampliner RE, Hixson LJ, Garewal HS. Are endoscopic measurements of colonic polyps reli-able? Am J Gastroenterol 1993; 88(4): 496-500.
18
Summers RM. Polyp size measurement at CT colonography: What do we know and what do we need to know? Radiology 2010; 255(3): 707-20.
19
Waye JD, Lewis BS, Frankel A, Geller SA. Small colon polyps. Am J Gastroenterol 1988; 83(2): 120-2.
20
Tedesco FJ, Hendrix JC, Pickens CA, Brady PG, Mills LR. Diminutive polyps: Histopathology, spatial distribution, and clinical significance. Gastrointest Endosc 1982; 28(1): 1-5.
21
Rex DK, Smith JJ, Ulbright TM, Lehman GA. Distal colonic hyperplastic polyps do not predict proximal adenomas in asymptomatic average-risk subjects. Gastroenterology 1992; 102(1): 317-9.
22
Provenzale D, Garrett JW, Condon SE, Sandler RS. Risk for colon adenomas in patients with rectosigmoid hyperplastic polyps. Ann Intern Med 1990; 113(10): 760-3.
23
Sciallero S, Costantini M, Bertinelli E, et al. Distal hyperplastic polyps do not predict proximal adenomas: Results from a multicentric study of colorectal adenomas. Gastrointest Endosc 1997; 46(2): 124-30.
24
Bond JH. Polyp guideline: Diagnosis, treatment, and surveillance for patients with colorectal polyps. Am J Gastroenterol 2000; 95(11): 3053-63.
25
Binmoeller KF, Bohnacker S, Seifert H, Thonke F, Valdeyar H, Soehendra N. Endoscopic snare excision of “giant” colorectal polyps. Gastrointest Endosc 1996; 43(3): 183-8.
26
Muto T, Kamiya J, Sawada T, et al. Small “flat adenoma” of the large bowel with special reference to its clinicopathologic features. Dis Colon Rectum 1985; 28(11): 847-51.
27
Doi K. Current status and future potential of computer-aided diagnosis in medical imaging. Br J Radiol 2005; 78(Spec No 1)(Suppl. 1): S3-S19.
28
Doi K. Computer-aided diagnosis in medical imaging: Historical review, current status and future potential. Comput Med Imaging Graph 2007; 31(4-5): 198-211.
29
Bernal J, Tajkbaksh N, Sanchez FJ, et al. Comparative validation of polyp detection methods in video colonoscopy: Results from the MICCAI 2015 endoscopic vision challenge. IEEE Trans Med Imaging 2017; 36(6): 1231-49.
30
Mori Y, Kudo SE, Wakamura K, et al. Novel computer-aided diagnostic system for colorectal lesions by using endocytoscopy (with videos). Gastrointest Endosc 2015; 81(3): 621-9.
31
Misawa M, Kudo SE, Mori Y, et al. Characterization of colorectal lesions using a computer-aided diagnostic system for narrow-band imaging endocytoscopy. Gastroenterology 2016; 150(7): 1531-1532.e3.
32
Chen PJ, Lin MC, Lai MJ, Lin JC, Lu HHS, Tseng VS. Accurate classification of diminutive colorectal polyps using computer-aided analysis. Gastroenterology 2018; 154(3): 568-75.
33
Nakayoshi T, Tajiri H, Matsuda K, Kaise M, Ikegami M, Sasaki H. Magnifying endoscopy combined with narrow band imaging system for early gastric cancer: Correlation of vascular pattern with histopathology (including video). Endoscopy 2004; 36(12): 1080-4.
34
Tischendorf JJ, Wasmuth HE, Koch A, Hecker H, Trautwein C, Winograd R. Value of magnifying chromoendoscopy and narrow band imaging (NBI) in classifying colorectal polyps: A prospective controlled study. Endoscopy 2007; 39(12): 1092-6.
35
Viana RLP, Iwahori Y, Funahashi K, Kasugai K. Automated polyp detection from endoscope images. Proceeding of SCIS-ISIS 2012.
36
Iwahori Y, Shinohara T, Hattori A, et al. Automatic polyp detection in endoscope images using a Hessian filter. MVA 2013; 21-4.
37
Agrahari H, Iwahori Y, Bhuyan K, et al. Automatic polyp detection using DSC edge detector and HOG features. Proceedings of the 3rd International Conference on Pattern Recognition Applications and Methods, SCITEPRESS-Science and Technology Publications, Lda 2014; 495-501.
38
Amber A, Iwahori Y, Bhuyan M, Woodham RJ, Kasugai K. Feature point based polyp tracking in endoscopic videos. Applied Computing and Information Technology/2nd International Conference on Computational Science and Intelligence (ACIT-CSI), 2015 3rd International Conference on, IEEE 2015; 299-964 .
39
Iwahori Y, Hattori A, Adachi Y, Bhuyan MK, Woodham RJ, Kasugai K. Automatic detection of polyp using Hessian filter and HOG features. Procedia Comput Sci 2015; 60: 730-9.
40
Iwahori Y, Hagi H, Usami H, et al. Automatic polyp detection from endoscope image using likelihood map based on edge information. ICPRAM 2017; 402-9.
41. Sasmal P, Iwahori Y, Bhuyan M, Kasugai K. Active contour segmentation of polyps in capsule endoscopic images. Signals and Systems (ICSigSys), 2018 International Conference on, IEEE 2018; 201-4.
42. Mori Y, Kudo SE, Misawa M, et al. Real-time use of artificial intelligence in identification of diminutive polyps during colonoscopy: A prospective study. Ann Intern Med 2018; 169(6): 357-66.
43. Huang A, Summers RM, Hara AK. Surface curvature estimation for automatic colonic polyp detection. Medical Imaging 2005: Physiology, Function, and Structure from Medical Images, Volume 5746, International Society for Optics and Photonics 2005; 393-403.
44. Belongie S, Mori G, Malik J. Matching with shape contexts. Statistics and Analysis of Shapes 2006; 81-105.
45. van Wijk C, van Ravesteijn VF, Vos FM, van Vliet LJ. Detection and segmentation of colonic polyps on implicit isosurfaces by second principal curvature flow. IEEE Trans Med Imaging 2010; 29(3): 688-98.
46. Alexandre LA, Casteleiro J, Nobreinst N. Polyp detection in endoscopic video using SVMs. European Conference on Principles of Data Mining and Knowledge Discovery 2007; 358-65.
47. Zhou M, Bao G, Geng Y, Alkandari B, Li X. Polyp detection and radius measurement in small intestine using video capsule endoscopy. Biomedical Engineering and Informatics (BMEI), 2014 7th International Conference on, IEEE 2014; 237-41.
48. Magoulas GD, Plagianakos VP, Vrahatis MN. Neural network-based colonoscopic diagnosis using on-line learning and differential evolution. Appl Soft Comput 2004; 4(4): 369-79.
49. Horn BK. Shape from shading: A method for obtaining the shape of a smooth opaque object from one view. 1970.
50. Zheng Q, Chellappa R. Estimation of illuminant direction, albedo, and shape from shading. Computer Vision and Pattern Recognition, 1991. Proceedings CVPR '91, IEEE Computer Society Conference on, IEEE 1991; 540-5.
51. Lee KM, Kuo CCJ. Shape from shading with a linear triangular element surface model. IEEE Trans Pattern Anal Mach Intell 1993; (8): 815-22.
52. Bichsel M, Pentland AP. A simple algorithm for shape from shading. Computer Vision and Pattern Recognition, 1992. Proceedings CVPR '92, IEEE Computer Society Conference on 1992; 459-65.
53. Lee CH, Rosenfeld A. Improved methods of estimating shape from shading using the light source coordinate system. Artif Intell 1985; 26(2): 125-43.
54. Pentland AP. Finding the illuminant direction. JOSA 1982; 72(4): 448-55.
55. Ping-Sing T, Shah M. Shape from shading using linear approximation. Image Vis Comput 1994; 12(8): 487-98.
56. Wheatstone C. XVIII. Contributions to the physiology of vision. Part the first: On some remarkable, and hitherto unobserved, phenomena of binocular vision. Philos Trans R Soc Lond 1838; 128: 371-94.
57. Woodham R, Iwahori Y, Barman RA. Photometric stereo: Lambertian reflectance and light sources with unknown direction and strength. University of British Columbia 1991.
58. Tomasi C, Kanade T. Shape and motion from image streams under orthography: A factorization method. Int J Comput Vis 1992; 9(2): 137-54.
59. Szeliski R, Kang SB. Recovering 3D shape and motion from image streams using nonlinear least squares. J Vis Commun Image Represent 1994; 5(1): 10-28.
60. Deka A, Iwahori Y, Bhuyan MK, Sasmal P, Kasugai K. Dense 3D reconstruction of endoscopic polyp. BIOIMAGING 2018; 159-66.
61. Aloimonos J. Shape from texture. Biol Cybern 1988; 58(5): 345-60.
62. Kanatani K, Chou TC. Shape from texture: General principle. Artif Intell 1989; 38(1): 1-48.
63. Blake A, Bülthoff HH, Sheinberg D. Shape from texture: Ideal observers and human psychophysics. Vision Res 1993; 33(12): 1723-37.
64. Cavanagh P, Leclerc YG. Shape from shadows. J Exp Psychol Hum Percept Perform 1989; 15(1): 3-27.
65. Savarese S, Andreetto M, Rushmeier H, Bernardini F, Perona P. 3D reconstruction by shadow carving: Theory and practical evaluation. Int J Comput Vis 2007; 71(3): 305-36.
66. Blake A, Bülthoff H. Shape from specularities: Computation and psychophysics. Philos Trans R Soc Lond B Biol Sci 1991; 331(1260): 237-52.
67. Adato Y, Vasilyev Y, Ben-Shahar O, Zickler T. Toward a theory of shape from specular flow. Computer Vision, 2007. ICCV 2007, IEEE 11th International Conference on, IEEE 2007; 1-8.
68. Roth S, Black MJ. Specular flow and the recovery of surface structure. Computer Vision and Pattern Recognition, 2006 IEEE Computer Society Conference on, Volume 2, IEEE 2006; 1869-76.
69. Cheung K, Baker S, Kanade T. Shape-from-silhouette of articulated objects and its use for human body kinematics estimation and motion capture. Computer Vision and Pattern Recognition, Proceedings 2003 IEEE Computer Society Conference on, IEEE 2003; 1
70. Cheung KM, Baker S, Kanade T. Shape-from-silhouette across time part I: Theory and algorithms. Int J Comput Vis 2005; 62(3): 221-47.
71. Baker S, Kanade T, et al. Shape-from-silhouette across time part II: Applications to human modeling and markerless motion tracking. Int J Comput Vis 2005; 63(3): 225-45.
72. Pentland AP. A new sense for depth of field. IEEE Trans Pattern Anal Mach Intell 1987; 9(4): 523-31.
73. Pentland A, Scherock S, Darrell T, Girod B. Simple range cameras based on focal error. J Opt Soc Am A 1994; 11(11): 2925-34.
74. Nayar SK, Watanabe M, Noguchi M. Real-time focus range sensor. IEEE Trans Pattern Anal Mach Intell 1996; 18(12): 1186-98.
75. Favaro P, Soatto S. Learning shape from defocus. European Conference on Computer Vision 2002; 735-45.
76. Favaro P, Mennucci A, Soatto S. Observing shape from defocused images. Int J Comput Vis 2003; 52(1): 25-43.
77. Namboodiri VP, Chaudhuri S. On defocus, diffusion and depth estimation. Pattern Recognit Lett 2007; 28(3): 311-9.
78. Favaro P, Soatto S, Burger M, Osher SJ. Shape from defocus via diffusion. IEEE Trans Pattern Anal Mach Intell 2008; 30(3): 518-31.
79. Parot V, Lim D, González G, et al. Photometric stereo endoscopy. J Biomed Opt 2013; 18(7): 076017.
80. Hassanfiroozi A, Huang YP, Javidi B, Shieh HPD. Hexagonal liquid crystal lens array for 3D endoscopy. Opt Express 2015; 23(2): 971-81.
81. Le HN, Nguyen H, Wang Z, et al. An endoscopic 3D structured illumination imaging system for robotic anastomosis surgery (conference presentation). Advanced Biomedical and Clinical Diagnostic and Surgical Guidance Systems XVI, International Society for Optics and Photonics 2018; 10484
82. Liu Q, Sclabassi RJ, Yao N, Sun M. 3D construction of endoscopic images based on computational stereo. Bioengineering Conference, 2006. Proceedings of the IEEE 32nd Annual Northeast, IEEE 2006; 69-70.
83. Noonan DP, Mountney P, Elson DS, Darzi A, Yang GZ. A stereoscopic fibroscope for camera motion and 3D depth recovery during minimally invasive surgery. Robotics and Automation, 2009. ICRA '09, IEEE International Conference on, IEEE 2009; 4463-8.
84. Koppel D, Chen CI, Wang YF, et al. Toward automated model building from video in computer-assisted diagnoses in colonoscopy. Medical Imaging: Visualization and Image-Guided Procedures, International Society for Optics and Photonics 2007; 6509
85. Wu C, Narasimhan SG, Jaramaz B. A multi-image shape-from-shading framework for near-lighting perspective endoscopes. Int J Comput Vis 2010; 86(2-3): 211-28.
86. Deguchi K. Shape reconstruction from endoscope image by its shadings. Multisensor Fusion and Integration for Intelligent Systems, 1996 IEEE/SICE/RSJ International Conference on, IEEE 1996; 321-8.
87. Suter M, Reinhardt JM, Hoffman EA, McLennan G. 3D pulmonary airway color image reconstruction via shape from shading and virtual bronchoscopy imaging techniques. Medical Imaging 2005: Image Processing, Volume 5747, International Society for Optics and Photonics 2005; 755-64.
88. Koulaouzidis A, Karargyris A. Three-dimensional image reconstruction in capsule endoscopy. World J Gastroenterol 2012; 18(31): 4086-90.
89. Koulaouzidis A, Karargyris A. Use of enhancement algorithm to suppress reflections in 3-D reconstructed capsule endoscopy images. World J Gastrointest Endosc 2013; 5(9): 465-7.
90. Iwahori Y, Tatematsu K, Nakamura T, Fukui S, Woodham RJ, Kasugai K. 3D shape recovery from endoscope image based on both photometric and geometric constraints. Knowledge-Based Information Systems in Practice 2015; 65-80.
91. Iwahori Y, Tsuda S, Woodham RJ, Bhuyan MK, Kasugai K. Modification of polyp size and shape from two endoscope images using RBF neural network. International Conference on Pattern Recognition Applications and Methods 2015; 229-46.
92. Tsuda S, Iwahori Y, Hanai Y, Woodham RJ, Bhuyan MK, Kasugai K. Recovering size and shape of polyp from endoscope image by RBF-NN modification. Image Processing (ICIP), 2015 IEEE International Conference on, IEEE 2015; 4684-8.
93. Iwahori Y, Yamaguchi D, Nakamura T, Kijsirikul B, Bhuyan MK, Kasugai K. Estimating reflectance parameter of polyp using medical suture information in endoscope image. Proceedings of the 5th International Conference on Pattern Recognition Applications and Methods, SCITEPRESS-Science and Technology Publications, Lda 2016; 503-9.
94. Usami H, Hanai Y, Iwahori Y, Kasugai K. 3D shape recovery of polyp using two light sources endoscope. Computer and Information Science (ICIS), 2016 IEEE/ACIS 15th International Conference on, IEEE 2016; 1-6.
95. Iwahori Y, Suda T, Funahashi K, et al. Shape recovery of polyp from endoscope image using blood vessel information. International Conference on Computational Science/Intelligence & Applied Informatics 2017; 165-84.
96. Dixit PK, Iwahori Y, Bhuyan M, Kasugai K, Vishwakarma A. Polyp shape estimation from endoscopy video using EKF monocular SLAM with SfS model prior. Wireless Communications, Signal Processing and Networking (WiSPNET), 2017 International Conference on, IEEE 2017; 52-7.
97. Usami H, Iwahori Y, Ogasawara N, Kasugai K, Adachi Y. Polyp shape recovery based on blood vessel structure analysis. Procedia Comput Sci 2017; 112: 1793-800.
98. Zhang Z. A flexible new technique for camera calibration. IEEE Trans Pattern Anal Mach Intell 2000; 22(11): 1330-4.
99. Heikkilä J, Silvén O. A four-step camera calibration procedure with implicit image correction. Computer Vision and Pattern Recognition, 1997. Proceedings, 1997 IEEE Computer Society Conference on, IEEE 1997; 1106-12.
100. Shafer SA. Using color to separate reflection components. Color Res Appl 1985; 10(4): 210-8.
101. Tchoulack S, Langlois JP, Cheriet F. A video stream processor for real-time detection and correction of specular reflections in endoscopic images. Circuits and Systems and TAISA Conference, 2008. NEWCAS-TAISA 2008, Joint 6th International IEEE Northeast Workshop on, IEEE 2008; 49-52.
102. Yang Q, Wang S, Ahuja N. Real-time specular highlight removal using bilateral filtering. European Conference on Computer Vision 2010; 87-100.
103. Yang Q, Tang J, Ahuja N. Efficient and robust specular highlight removal. IEEE Trans Pattern Anal Mach Intell 2015; 37(6): 1304-11.
104. Schlüns K, Koschan A. Global and local highlight analysis in color images. Proc 1st Int Conf Color Graphics Image Processing 2000; 300-4.
105. Iwahori Y, Nakagawa T, Woodham RJ, Fukui S, Kawanaka H. Shape from self-calibration and fast marching method. Pattern Recognition, 2008. ICPR 2008, 19th International Conference on, IEEE 2008; 1-4.
106. Shimasaki Y, Iwahori Y, Neog DR, Woodham RJ, Bhuyan M. Generating Lambertian image with uniform reflectance for endoscope image. IWAIT 2013; 1-6.
107. Fischler MA, Bolles RC. Random sample consensus: A paradigm for model fitting with applications to image analysis and automated cartography. Commun ACM 1981; 24(6): 381-95.
108. Grubbs FE. Sample criteria for testing outlying observations. Ann Math Stat 1950; 21(1): 27-58.