The Costes method for
[http://en.wikipedia.org/wiki/Statistical_significance statistical significance] relies on the spatial calibration of the image, knowledge of the [http://en.wikipedia.org/wiki/Numerical_aperture numerical aperture (N.A.)] of the objective lens, and the fluorescence emission wavelength to calculate how many pixels the [http://en.wikipedia.org/wiki/Point_spread_function point spread function] (PSF) covers in the image. It then takes the image from one of the channels and randomizes it by moving PSF-sized chunks of the image to random locations in a new test image. It then calculates the [http://en.wikipedia.org/wiki/Correlation Pearson's correlation coefficient (r)] between the randomized image and the original image of the other channel. If the correlation of a randomized image with the real image of the other channel is as good as or better than the correlation between the two real images, then any correlation you measure is no better than what you would have obtained by chance for this image. This test is performed many times (100), and the P-value is output: the proportion of randomized images whose correlation with the real image was worse than the correlation between the two real images. A P-value of 1.00 means that none of the randomized images had better correlation. 0.95 corresponds to the usual statistical confidence limit of 95%. Anything lower than that, and the correlation / colocalization that you measure in the real images is unlikely to be better than random chance, and is thus probably not interesting.
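The procedure above can be sketched in Python. This is a minimal illustration, not the actual plugin implementation: the function names are invented, the PSF width is estimated with the Rayleigh criterion (0.61 λ / N.A.), and the block scrambling is a simplified version that crops the image to a whole number of blocks.

```python
import numpy as np

def psf_width_pixels(wavelength_nm, na, pixel_size_nm):
    """Approximate lateral PSF width (Rayleigh criterion) in pixels."""
    return (0.61 * wavelength_nm / na) / pixel_size_nm

def pearson_r(a, b):
    """Pearson's correlation coefficient between two images."""
    return np.corrcoef(a.ravel().astype(float), b.ravel().astype(float))[0, 1]

def scramble_blocks(img, block, rng):
    """Cut the image into PSF-sized blocks and reassemble them in random order."""
    h, w = img.shape
    h_trim, w_trim = h - h % block, w - w % block
    blocks = (img[:h_trim, :w_trim]
              .reshape(h_trim // block, block, w_trim // block, block)
              .swapaxes(1, 2)
              .reshape(-1, block, block))
    rng.shuffle(blocks)                     # shuffle along the first axis
    return (blocks.reshape(h_trim // block, w_trim // block, block, block)
                  .swapaxes(1, 2)
                  .reshape(h_trim, w_trim))

def costes_p_value(ch1, ch2, block, n_iter=100, seed=0):
    """Fraction of randomized images whose r is worse than the real r."""
    rng = np.random.default_rng(seed)
    h, w = ch1.shape
    ch1c = ch1[:h - h % block, :w - w % block]   # crop to whole blocks
    ch2c = ch2[:h - h % block, :w - w % block]
    r_real = pearson_r(ch1c, ch2c)
    worse = 0
    for _ in range(n_iter):
        if pearson_r(scramble_blocks(ch1, block, rng), ch2c) < r_real:
            worse += 1
    return worse / n_iter
```

For two genuinely colocalized channels, nearly every scrambled image should correlate worse with the other channel than the real image does, so `costes_p_value` should return a value at or near 1.00.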
In this case the P-value should be 1.00. Since you told the plugin to display the Pearson's correlation (r) values (labelled R values here), they appear in another window. You can see they are all close to zero, and in the results you can see that the average randomized R value is about zero, meaning that none of the randomized images correlated with the real image, which is a good thing!